Author Topic: SOFTWARE TESTING  (Read 1306 times)
Taruna
SOFTWARE TESTING
« Posted: January 04, 2007, 01:38:53 AM »


SOFTWARE TESTING


Introduction

Software development styles have changed many times over the past few decades, catering to the needs of the era they represented. With increasing pressure on time and money, the concept of component-based software development originated. In this method, parts of the software project are outsourced to other development organizations and, finally, the third-party components (commercial off-the-shelf, or COTS) are integrated to form a software system. A software component is defined as "a unit of composition with contractually specified interfaces and explicit context dependencies only. A software component can be deployed independently and is subject to composition by third parties". A challenge for efficient component development is that the components' granularity and mutual dependencies have to be controlled right from the early stages of the development life cycle. One of the greatest problems with component technology is fault isolation of individual components in the system and coming up with efficient test strategies for the integrated modules that use these third-party components. Software components enable practical reuse of software parts and amortization of investments over multiple applications. Each part or component is well defined and independently deployable. Composition is the key technique by which systems of software components are constructed.

Some of the component characteristics that are relevant during testing (a short sketch after this list illustrates the first three):

* Component Observability: This defines the ease with which a component can be observed in terms of its operational behaviors, input parameters and outputs. The design and definition of a component interface thus play a major role in determining the component's observability.

* Component Traceability: This is the capacity of the component to track the status of its attributes and behavior. It has two aspects: behavior traceability, where the component facilitates the tracking of its internal and external behaviors, and trace controllability, which is the ability of the component to facilitate the customization of its tracking functions.

* Component Controllability: This shows the ease of controlling a component's inputs/outputs, operations and behaviors.

* Component Understandability: This shows how much component information is provided and how well it is presented.
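
A minimal sketch of how the first three characteristics might surface in a component's interface (the CounterComponent below is a hypothetical example, not something from this discussion):

# Hypothetical sketch: a tiny component whose interface exposes
# observability (query current state), traceability (an event log of
# its behaviour), and controllability (explicit inputs that drive it).

class CounterComponent:
    def __init__(self):
        self._value = 0
        self._trace = []          # traceability: record of operations

    def increment(self, step=1):  # controllability: operation driven by inputs
        self._value += step
        self._trace.append(("increment", step, self._value))
        return self._value        # observability: output visible to the caller

    @property
    def value(self):              # observability: internal state can be inspected
        return self._value

    def trace(self):              # traceability: tracking functions can be queried
        return list(self._trace)


if __name__ == "__main__":
    c = CounterComponent()
    c.increment(2)
    c.increment(3)
    assert c.value == 5
    print(c.trace())              # [('increment', 2, 2), ('increment', 3, 5)]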

Testing Software Components

When to Test a Component

One of the first issues in testing software components is whether all that effort is required in the first place. When is it worthwhile to test a component in a system? If the consequences of the component not working outweigh the effort of testing it, then plans should be made to test that component.

Which components to test

When risk classification of the use cases is mapped onto components, we find that not all components need to be tested to the same coverage level.

* Reusable Components - Components intended for reuse should be tested over a wider range of values.

* Domain Components - Components that represent significant domain concepts should be tested both for correctness and for the faithfulness of the representation.

* Commercial Components - Components that will be sold as individual products should be tested not only as reusable components but also as potential sources of liability.

The Ultimate Goal of Testing

Testing a software component is basically done to resolve the following issues:

* Check whether the component meets its specification and fulfills its functional requirements.

* Check whether the correct and complete structural and interaction requirements, specified before the development of the component, are reflected in the implemented software system.

Problems in Testing Software Components

The focus now shifts to the most important problem of component software technology, i.e. coming up with efficient testing strategies for component-integrated software systems.

Building Reusable Component Tests

Current software development teams use an ad-hoc approach to create component test suites. It is also difficult to come up with a uniform and consistent test suite technology that caters to diverse requirements such as different information formats, repository technologies, database schemas and test access interfaces of the test tools used for testing such diverse software components. With increasing use of software components, the tests used for these components should also be reused. Systematic tools and methods are required to set up these reusable test suites and to organize, manage and store various component test resources such as test data and test scripts.

Constructing testable components

The definition of an ideal software component says that the component is not only executable and deployable, but also testable using a standard set of component test facilities. Designing such components is difficult because they must have a specialized, well-defined test architecture model and built-in test interfaces to support their interaction with component test suites and test beds.
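
One possible shape of such a built-in test interface, sketched below; the self_test() hook and the TemperatureConverter component are illustrative assumptions, not a standard API:

# Hypothetical sketch of a component with a built-in test (BIT) interface.
# The self_test() hook lets a standard test bed exercise the component
# without knowing its internals.

class TemperatureConverter:
    def to_fahrenheit(self, celsius):
        return celsius * 9.0 / 5.0 + 32.0

    def self_test(self):
        """Built-in test: returns a list of (check, passed) pairs."""
        return [
            ("freezing point", self.to_fahrenheit(0) == 32.0),
            ("boiling point", self.to_fahrenheit(100) == 212.0),
        ]


if __name__ == "__main__":
    component = TemperatureConverter()
    for name, passed in component.self_test():
        print(f"{name}: {'PASS' if passed else 'FAIL'}")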

Building a Generic and Reusable Test Bed

It is very difficult to develop a testing tool or test-bed technology capable of testing a system whose components use more than one implementation language or technology.

Construct Component Test Drivers and Stubs

The traditional way of constructing test drivers and test stubs is to create them so that they work for a specific project. But with the advent of the component world and systems using reusable third-party components, such traditional constructions will not work, because they cannot cope with the diversity of software components and their customizable functions.

The Great Divide

One of the first ways to look at the different issues in component testing is to divide the component domain into the component producer and the component user or consumer. Both these parties have different knowledge, understanding and visibility of the component. The component developer has the whole source code of the component, whereas the component user frequently looks for more information to effectively evaluate, analyze, deploy, test and customize the component.

Testing for the component producer becomes extremely complicated because of the very varied applicability domain of the component. The more reusable a component is, the wider its range of applicability, so testing needs to be done in a context-independent manner. This is also called the framework design problem: abstracting the acquired domain knowledge to engineer plug-compatible components for new applications and test them effectively. Assumptions are made to get around the problem of not knowing the future execution context of the component. Since these assumptions are not very explicit or methodological, they lead to architectural mismatch for COTS component users. This is more a methodological issue than a technical one. Finally, the component producer should build mechanisms into the component so that faults related to the component in the user application can be revealed easily.

From the component user's perspective, the biggest problem is the absence of source code for testing the component in the system. Traditional testing techniques such as data-flow testing, control-dependence calculations, or alias analysis require the source code of the software system under test. The second issue is that, even if the source code of the component is available, the component and the user application may be implemented in different languages. Finally, in order to obtain the highest test coverage, the component user should be able to identify the precise portion of component functionality used in the application, which is again a difficult task. The adequacy criterion of a test suite will not be met when such identification is not done prior to testing.

System Testing versus Unit Testing

Finally, it is worth mentioning that, unlike traditional software systems, no extent of unit testing on the part of the component producer will really determine how the same component will behave in the user's system. This is mostly because of the variability of the user's application domain and the component producer's lack of foresight about how the component will work with different functional customizations. At the system level, important interactions between the components have to be considered. Therefore the component user absolutely needs to develop a very strong system integration test plan. Integrating into the system on the basis of the individual component reliability provided by the component producer alone is not enough.

Additional issues with system testing are redundant testing and fault-tolerance testing. In redundant testing, the test adequacy criteria used during unit testing of components are applied again at system-level testing of the same components, so a lot of time is wasted testing the same things over and over. In fault-tolerance testing, the fault-handling code (usually written with the component) rarely gets executed in the test bed, because the faults rarely get triggered; this is also called optimistic inaccuracy. Since the ability of the system to perform well depends on its effective handling of faults, ways have to be developed so that fault-tolerant code in the component is always tested.

Logged

Taruna
Re: SOFTWARE TESTING
« Reply #1 Posted: January 04, 2007, 01:39:17 AM »

There are a number of testing methods and testing techniques, serving multiple purposes in different life cycle phases. Classified by purpose, software testing can be divided into: correctness testing, performance testing, reliability testing and security testing. Classified by life-cycle phase, software testing can be classified into the following categories: requirements phase testing, design phase testing, program phase testing, evaluating test results, installation phase testing, acceptance testing and maintenance testing. By scope, software testing can be categorized as follows: unit testing, component testing, integration testing, and system testing.

Correctness testing

Correctness is the minimum requirement of software, the essential purpose of testing. Correctness testing needs some type of oracle to tell the right behavior from the wrong one. The tester may or may not know the inside details of the software module under test, e.g. control flow, data flow, etc. Therefore, either a white-box or a black-box point of view can be taken in testing software. Note that the black-box and white-box ideas are not limited to correctness testing only.
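
A tiny sketch of an oracle at work; the leap_year function and its expected values stand in for any module whose specification supplies the correct outputs:

# Sketch of an oracle in correctness testing: the expected outputs come
# from the specification (hard-coded here); the test compares the module's
# actual output against that oracle. leap_year is a hypothetical unit
# under test, not something from the original post.

def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# (input, expected-output) pairs taken from the specification
ORACLE = [(1996, True), (1900, False), (2000, True), (2023, False)]

if __name__ == "__main__":
    for year, expected in ORACLE:
        actual = leap_year(year)
        assert actual == expected, f"{year}: expected {expected}, got {actual}"
    print("behaviour matches the oracle")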

Black-box testing

The black-box approach is a testing method in which test data are derived from the specified functional requirements without regard to the final program structure. It is also termed data-driven, input/output-driven, or requirements-based testing. Because only the functionality of the software module is of concern, black-box testing also mainly refers to functional testing: a testing method that emphasizes executing the functions and examining their input and output data. The tester treats the software under test as a black box: only the inputs, outputs and specification are visible, and the functionality is determined by observing the outputs for corresponding inputs. In testing, various inputs are exercised and the outputs are compared against the specification to validate correctness. All test cases are derived from the specification; no implementation details of the code are considered.

It is obvious that the more of the input space we have covered, the more problems we will find, and therefore the more confident we will be about the quality of the software. Ideally we would be tempted to test the input space exhaustively. But exhaustively testing the combinations of valid inputs is impossible for most programs, let alone considering invalid inputs, timing, sequence, and resource variables. Combinatorial explosion is the major roadblock in functional testing. To make things worse, we can never be sure whether the specification is correct or complete. Due to limitations of the language used in specifications (usually natural language), ambiguity is often inevitable. Even if we use some type of formal or restricted language, we may still fail to write down all the possible cases in the specification. Sometimes the specification itself becomes an intractable problem: it is not possible to specify precisely every situation that can be encountered using limited words. And people can seldom specify clearly what they want; they can usually tell whether a prototype is, or is not, what they want only after it is finished. Specification problems contribute approximately 30 percent of all bugs in software.

The research in black-box testing mainly focuses on how to maximize the effectiveness of testing with minimum cost, usually measured by the number of test cases. It is not possible to exhaust the input space, but it is possible to exhaustively test a subset of it. Partitioning is one of the common techniques: if we partition the input space and assume all the input values in a partition are equivalent, then we only need to test one representative value in each partition to sufficiently cover the whole input space. Domain testing partitions the input domain into regions and considers the input values in each domain an equivalence class. Domains can be exhaustively covered by selecting one or more representative values in each domain. Boundary values are of special interest: experience shows that test cases that explore boundary conditions have a higher payoff than test cases that do not. Boundary value analysis requires one or more boundary values to be selected as representative test cases. The difficulty with domain testing is that incorrect domain definitions in the specification cannot be efficiently discovered.
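
A small sketch of partitioning plus boundary value analysis; the grade() function and its three regions are invented purely for illustration:

# Sketch of domain partitioning and boundary value analysis for a
# hypothetical grade() function whose specification defines three
# regions: fail (<40), pass (40-69) and distinction (70-100).

def grade(mark):
    if not 0 <= mark <= 100:
        raise ValueError("mark out of range")
    if mark < 40:
        return "fail"
    if mark < 70:
        return "pass"
    return "distinction"

# one representative per partition, plus the boundary values of each region
CASES = [
    (20, "fail"), (0, "fail"), (39, "fail"),
    (55, "pass"), (40, "pass"), (69, "pass"),
    (85, "distinction"), (70, "distinction"), (100, "distinction"),
]

if __name__ == "__main__":
    for mark, expected in CASES:
        assert grade(mark) == expected, (mark, expected)
    print("all partition and boundary cases pass")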

Good partitioning requires knowledge of the software structure. A good testing plan will not only contain black-box testing, but also white-box approaches, and combinations of the two.

White-box testing

Contrary to black-box testing, in white-box testing the software is viewed as a white box, or glass box, because the structure and flow of the software under test are visible to the tester. Testing plans are made according to the details of the software implementation, such as programming language, logic, and style, and test cases are derived from the program structure. White-box testing is also called glass-box testing, logic-driven testing or design-based testing.

There are many techniques available in white-box testing, because the problem of intractability is eased by specific knowledge of, and attention to, the structure of the software under test. The intention of exhausting some aspect of the software is still strong in white-box testing, and some degree of exhaustion can be achieved, such as executing each line of code at least once (statement coverage), traversing every branch statement (branch coverage), or covering all the possible combinations of true and false condition predicates (multiple condition coverage).
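
For instance, a function with a single decision needs at least two test cases to achieve branch coverage; the abs_value example below is illustrative (a tool such as coverage.py could measure the coverage automatically):

# Sketch of branch coverage: abs_value has one decision with two outcomes,
# so at least two test cases are needed to traverse both branches.
# Here the two cases are simply chosen by inspecting the code.

def abs_value(x):
    if x < 0:          # branch taken when x is negative
        return -x
    return x           # branch taken when x is zero or positive

if __name__ == "__main__":
    assert abs_value(-3) == 3   # exercises the true branch
    assert abs_value(4) == 4    # exercises the false branch
    print("both branches executed")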

Control-flow testing, loop testing, and data-flow testing all map the corresponding flow structure of the software into a directed graph. Test cases are carefully selected based on the criterion that all the nodes or paths are covered or traversed at least once. By doing so we may discover unnecessary "dead" code: code that is of no use or never gets executed at all, which cannot be discovered by functional testing.

In mutation testing, the original program code is perturbed and many mutated programs are created, each containing one fault. Each faulty version of the program is called a mutant. Test data are selected based on their effectiveness at failing the mutants: the more mutants a test case can kill, the better the test case is considered. The problem with mutation testing is that it is too computationally expensive to use. The boundary between the black-box approach and the white-box approach is not clear-cut; many of the testing strategies mentioned above may not be safely classified as black-box or white-box testing. The same is true of transaction-flow testing, syntax testing, finite-state testing, and many other testing strategies. One reason is that all of the above techniques need some knowledge of the specification of the software under test. Another reason is that the idea of specification itself is broad: it may contain any requirement, including the structure, programming language, and programming style, as part of the specification content.
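
The mutation idea can be sketched by hand: below, one operator of a hypothetical is_adult function is perturbed to create a mutant, and a boundary test case kills it:

# Sketch of mutation testing: a single mutant is created by perturbing
# one operator in the original program, and a test case "kills" the
# mutant when the two versions produce different results.

def is_adult(age):          # original program
    return age >= 18

def is_adult_mutant(age):   # mutant: ">=" perturbed to ">"
    return age > 18

if __name__ == "__main__":
    # A weak test case that does NOT kill the mutant:
    assert is_adult(30) == is_adult_mutant(30) == True
    # A boundary test case that kills the mutant:
    original, mutant = is_adult(18), is_adult_mutant(18)
    print(f"age 18 -> original: {original}, mutant: {mutant} "
          f"(mutant killed: {original != mutant})")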

We may be reluctant to consider random testing a testing technique, since test case selection is simple and straightforward: the cases are chosen randomly. Studies indicate, however, that random testing can be more cost-effective for many programs; some very subtle errors can be discovered at low cost, and it is not inferior in coverage to other carefully designed testing techniques. One can also obtain a reliability estimate from random testing results based on operational profiles. Effectively combining random testing with other testing techniques may yield more powerful and cost-effective testing strategies.
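
A short sketch of random testing against a simple property check; the clamp function and the chosen input ranges are assumptions for illustration:

# Sketch of random testing: inputs are drawn at random and the output is
# checked against properties derived from the specification. clamp is a
# hypothetical unit under test.

import random

def clamp(x, lo, hi):
    return max(lo, min(x, hi))

if __name__ == "__main__":
    random.seed(1)
    for _ in range(1000):
        lo, hi = sorted(random.randint(-100, 100) for _ in range(2))
        x = random.randint(-200, 200)
        y = clamp(x, lo, hi)
        assert lo <= y <= hi                 # result stays inside the range
        assert y == x or x < lo or x > hi    # unchanged unless it was outside
    print("1000 random cases passed")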

Performance testing

Not all software systems have explicit performance specifications, but every system has implicit performance requirements: the software should not take infinite time or infinite resources to execute. The term "performance bug" is sometimes used to refer to design problems in software that cause the system performance to degrade.

Performance has always been a great concern and a driving force of computer evolution. Performance evaluation of a software system usually includes resource usage, throughput, stimulus-response time and queue lengths detailing the average or maximum number of tasks waiting to be serviced by selected resources. Typical resources that need to be considered include network bandwidth, CPU cycles, disk space, disk access operations, and memory usage. The goal of performance testing can be performance bottleneck identification, performance comparison and evaluation, etc. The typical method of doing performance testing is to use a benchmark: a program, workload or trace designed to be representative of typical system usage.
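
A minimal sketch of a benchmark-style performance test; the workload and the response-time budget below are assumed values, not requirements from this post:

# Sketch of a micro-benchmark as a performance test: a representative
# workload is timed and compared against a (hypothetical) response-time
# budget. Real performance tests would also watch memory, I/O and throughput.

import time

def workload():
    # representative work: sort a moderately large list
    data = list(range(100_000, 0, -1))
    data.sort()

if __name__ == "__main__":
    start = time.perf_counter()
    workload()
    elapsed = time.perf_counter() - start
    BUDGET_SECONDS = 0.5          # assumed budget, for illustration only
    print(f"elapsed: {elapsed:.4f}s (budget {BUDGET_SECONDS}s)")
    assert elapsed <= BUDGET_SECONDS, "performance budget exceeded"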

Reliability testing

Software reliability refers to the probability of failure-free operation of a system. It is related to many aspects of software, including the testing process. Directly estimating software reliability by quantifying its related factors can be difficult. Testing is an effective sampling method to measure software reliability. Guided by the operational profile, software testing (usually black-box testing) can be used to obtain failure data, and an estimation model can then be used to analyze the data, estimate the present reliability and predict future reliability. Based on this estimation, the developers can decide whether to release the software, and the users can decide whether to adopt and use it. The risk of using the software can also be assessed from reliability information. Some researchers advocate that the primary goal of testing should be to measure the dependability of tested software.

There is agreement on the intuitive meaning of dependable software: it does not fail in unexpected or catastrophic ways. Robustness testing and stress testing are variants of reliability testing based on this simple criterion.

The robustness of a software component is the degree to which it can function correctly in the presence of exceptional inputs or stressful environmental conditions. Robustness testing differs from correctness testing in that the functional correctness of the software is not of concern; it only watches for robustness problems such as machine crashes, process hangs or abnormal termination. The oracle is relatively simple, so robustness testing can be made more portable and scalable than correctness testing. This research has drawn increasing interest recently, with most work targeting commercial operating systems. Stress testing, or load testing, is often used to test the whole system rather than the software alone. In such tests the software or system is exercised with loads at or beyond the specified limits. Typical stresses include resource exhaustion, bursts of activity, and sustained high loads.
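
A small sketch of robustness testing: a hypothetical parse_port component is fed exceptional inputs, and the only concern is that it fails cleanly rather than hanging or crashing:

# Sketch of robustness testing: the component is given exceptional inputs
# and the test only checks that it fails gracefully (raising a clean
# ValueError/TypeError) rather than hanging or crashing the process.

def parse_port(text):
    port = int(text)              # may raise ValueError for junk input
    if not 0 < port < 65536:
        raise ValueError("port out of range")
    return port

EXCEPTIONAL_INPUTS = ["", "abc", "-1", "99999", "80.5", None]

if __name__ == "__main__":
    for raw in EXCEPTIONAL_INPUTS:
        try:
            parse_port(raw)
            outcome = "accepted"
        except (ValueError, TypeError):
            outcome = "rejected cleanly"
        print(f"{raw!r}: {outcome}")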

Security testing

Software quality, reliability and security are tightly coupled. Flaws in software can be exploited by intruders to open security holes. With the development of the Internet, software security problems are becoming even more severe.

Many critical software applications and services have integrated security measures against malicious attacks. The purposes of security testing of these systems include identifying and removing software flaws that may potentially lead to security violations, and validating the effectiveness of security measures. Simulated security attacks can be performed to find vulnerabilities.
Logged
Taruna
Re: SOFTWARE TESTING
« Reply #2 Posted: January 04, 2007, 01:39:42 AM »

Testing occurs at every stage of system construction. The larger a piece of code is when defects are detected, the harder and more expensive it is to find and correct them. The different levels of testing reflect that testing, in the general sense, is not a single phase of the software lifecycle; it is a set of activities performed throughout the entire lifecycle. The activities after implementation are normally the only ones associated with testing, but software testing must be considered before implementation as well.

The following paragraphs describe the testing activities from the second half of the software lifecycle.

Unit Testing

Unit testing exercises a unit in isolation from the rest of the system. A unit is typically a function or small collection of functions (libraries, classes), implemented by a single developer.

The main characteristic that distinguishes a unit is that it is small enough to test thoroughly, if not exhaustively. Developers are normally responsible for testing their own units, and these are normally white-box tests. The small size of units allows a high level of code coverage. It is also easier to locate and remove bugs at this level of testing.
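
A minimal unit-test sketch using Python's standard unittest module; the word_count unit is a made-up example:

# Minimal unit-test sketch: a single small unit (word_count, hypothetical)
# is exercised in isolation by its developer.

import unittest

def word_count(text):
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_multiple_words(self):
        self.assertEqual(word_count("software testing matters"), 3)

    def test_extra_whitespace(self):
        self.assertEqual(word_count("  spaced   out  "), 2)

if __name__ == "__main__":
    unittest.main()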

Integration Testing

One of the most difficult aspects of software development is the integration and testing of large, untested sub-systems: the integrated system frequently fails in significant and mysterious ways, and it is difficult to fix. Integration testing exercises several units that have been combined to form a module, subsystem, or system. It focuses on the interfaces between units, to make sure the units work together. The nature of this phase is certainly white box, as we must have a certain knowledge of the units to recognize whether we have been successful in fusing them together in the module.

There are three main approaches to integration testing: top-down, bottom-up and big bang. Top-down combines, tests, and debugs top-level routines that become the test harness or scaffolding for lower-level units. Bottom-up combines and tests low-level units into progressively larger modules and subsystems. Big bang testing is, unfortunately, the prevalent integration test method: it waits for all the module units to be complete before trying them out together. Integration tests can rely heavily on stubs or drivers. Stubs stand in for unfinished subroutines or sub-systems; a stub might consist of a function header with no body, or it may read and return test data from a file, return hard-coded values, or obtain data from the tester. Stub creation can be a time-consuming part of testing. The cost of drivers and stubs in the top-down and bottom-up methods is what drives the use of big bang testing, which waits for all the modules to be constructed and tested independently and then integrates them all at once. While this approach is very quick, it frequently reveals more defects than the other methods; these errors have to be fixed, and as we have seen, errors that are found later take longer to fix. In addition, as with bottom-up, there is really nothing that can be demonstrated until late in the process.
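
A small sketch of a stub in use: a hypothetical report routine that would normally call a real exchange-rate service is integrated against a stub returning hard-coded rates:

# Sketch of a test stub standing in for an unfinished subsystem: the code
# under test normally calls a real exchange-rate service; here a stub
# returns canned values so integration of the calling code can be tested
# early. Names are illustrative, not from the post.

class ExchangeRateServiceStub:
    """Stub: returns hard-coded data instead of calling the real service."""
    def rate(self, currency):
        return {"EUR": 0.9, "GBP": 0.8}.get(currency, 1.0)

def convert_total(total_usd, currency, service):
    return round(total_usd * service.rate(currency), 2)

if __name__ == "__main__":
    stub = ExchangeRateServiceStub()
    assert convert_total(100, "EUR", stub) == 90.0
    assert convert_total(100, "XYZ", stub) == 100.0
    print("convert_total integrates correctly with the rate service stub")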

External Function Testing

The external function test is a black box test to verify the system correctly implements specified functions. This phase is sometimes known as an alpha test. Testers will run tests that they believe reflect the end use of the system.

System Testing

The system test is a more robust version of the external function test, and can also be known as an alpha test. The essential difference between system and external function testing is the test platform: in system testing, the platform must be as close as possible to production use in the customers' environment, including factors such as hardware setup and database size and complexity. By replicating the target environment, we can more accurately test softer system features (performance, security and fault-tolerance).

Because of the similarities between the test suites in the external function and system test phases, a project may leave one of them out. It may be too expensive to replicate the user environment for the system test, or we may not have enough time to run both.

Acceptance Testing

An acceptance (or beta) test is an exercise of a completed system by a group of end users to determine whether the system is ready for deployment. Here the system will receive more realistic testing than in the system test phase, as the users have a better idea of how the system will be used than the system testers do.

Regression Testing

Regression testing is an expensive but necessary activity performed on modified software to provide confidence that changes are correct and do not adversely affect other system components. Four things can happen when a developer attempts to fix a bug: three of them are bad, and one is good. Because of the high probability that one of the bad outcomes will result from a change to the system, it is necessary to do regression testing. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Most industrial testing is done via test suites: automated sets of procedures designed to exercise all parts of a program and to show defects. While the original suite could be used to test the modified software, this might be very time-consuming. A regression test selection technique chooses, from an existing test set, the tests that are deemed necessary to validate modified software.

There are three main groups of test selection approaches in use:

* Minimization approaches seek to satisfy structural coverage criteria by identifying a minimal set of tests that must be rerun.

* Coverage approaches are also based on coverage criteria, but do not require minimization of the test set. Instead, they seek to select all tests that exercise changed or affected program components.

* Safe approaches attempt instead to select every test that will cause the modified program to produce different output than the original program.

An interesting approach to limiting test cases is based on whether we can confine testing to the "vicinity" of the change. (For example, if I put a new radio in my car, do I have to do a complete road test to make sure the change was successful?) A new breed of regression test theory tries to identify, through program flows or reverse engineering, where boundaries can be placed around modules and subsystems. These graphs can determine which tests from the existing suite may exhibit changed behavior on the new version.
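
A sketch of coverage-based selection in this spirit: given an (invented) record of which modules each test exercises, only the tests that touch the changed modules are re-run:

# Sketch of coverage-based regression test selection: tests are selected
# when the modules they exercise intersect the set of changed modules.
# The mapping below is invented for illustration.

TEST_COVERAGE = {
    "test_login": {"auth", "session"},
    "test_checkout": {"cart", "payment"},
    "test_profile_page": {"auth", "profile"},
    "test_search": {"search"},
}

def select_tests(changed_modules, coverage=TEST_COVERAGE):
    changed = set(changed_modules)
    return sorted(t for t, mods in coverage.items() if mods & changed)

if __name__ == "__main__":
    print(select_tests({"auth"}))      # ['test_login', 'test_profile_page']
    print(select_tests({"payment"}))   # ['test_checkout']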

Regression testing has been receiving more attention as corporations focus on fixing the Year 2000 bug. The goal of most Y2K work is to correct the date-handling portions of a system without changing any other behavior. A new Y2K version of the system is compared against a baseline original system. With the obvious exception of date formats, the behavior of the two versions should be identical: not only do they do the same things correctly, they also do the same things incorrectly. A non-Y2K bug in the original software should not have been fixed by the Y2K work.


Installation Testing

The testing of full, partial, or upgrade install/uninstall processes.
Logged