Tuesday, June 10, 2008

Regression Testing

Regression testing is testing conducted to evaluate whether a change to the system (all configuration-managed items) has introduced a new failure. It is often accomplished through the construction, execution, and analysis of product and system tests.

Regression testing is an expensive but necessary activity performed on modified software to provide confidence that changes are correct and do not adversely affect other system components. Four things can happen when a developer attempts to fix a bug. Three of these things are bad, and one is good:

The fix resolves the problem and nothing else is affected (the good outcome).
The fix resolves the problem, but something that previously worked now fails.
The fix does not resolve the problem.
The fix does not resolve the problem, and it also breaks something that previously worked.

Because of the high probability that one of the bad outcomes will result from a change to the system, it is necessary to do regression testing.

It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Most industrial testing is done via test suites: automated sets of procedures designed to exercise all parts of a program and to expose defects. While the original suite could be used to test the modified software, this may be very time-consuming. A regression test selection technique chooses, from an existing test set, the tests that are deemed necessary to validate the modified software.
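To illustrate what such an automated suite looks like, here is a minimal sketch using Python's unittest framework. The discount function and its expected behaviour are purely hypothetical and stand in for whatever part of the system is under test; the point is that the same checks can be rerun mechanically after every change.

import unittest

# Hypothetical function under test: applies a percentage discount to a price.
def discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class DiscountRegressionTests(unittest.TestCase):
    """Automated checks rerun after every change to discount()."""

    def test_no_discount_returns_original_price(self):
        self.assertEqual(discount(50.00, 0), 50.00)

    def test_full_discount_returns_zero(self):
        self.assertEqual(discount(50.00, 100), 0.00)

    def test_typical_discount(self):
        self.assertEqual(discount(80.00, 25), 60.00)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            discount(80.00, 150)

if __name__ == "__main__":
    unittest.main()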

There are three main groups of test selection approaches in use (a rough sketch contrasting them follows the list):

Minimization approaches seek to satisfy structural coverage criteria by identifying a minimal set of tests that must be rerun.

Coverage approaches are also based on coverage criteria, but do not require minimization of the test set. Instead, they seek to select all tests that exercise changed or affected program components.

Safe approaches instead attempt to select every test that could cause the modified program to produce different output than the original program.
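To make the coverage-style selection concrete, here is a rough Python sketch. The coverage map, test names, and list of changed functions are all hypothetical; in practice the map would come from a coverage tool and the changed-function list from a diff of the modification.

# Sketch of coverage-based regression test selection.
# Assumes we already know which functions each test exercises (from a
# coverage tool) and which functions were changed (from a diff).

coverage_map = {
    "test_login":        {"authenticate", "load_profile"},
    "test_checkout":     {"calculate_total", "apply_discount", "charge_card"},
    "test_reporting":    {"generate_report"},
    "test_discount_cap": {"apply_discount"},
}

changed_functions = {"apply_discount"}  # taken from the diff of the modification

def select_tests(coverage_map, changed_functions):
    """Select every test that exercises at least one changed function."""
    return sorted(
        test for test, covered in coverage_map.items()
        if covered & changed_functions
    )

print(select_tests(coverage_map, changed_functions))
# ['test_checkout', 'test_discount_cap']

In this illustration, a minimization approach would go further and drop test_checkout because test_discount_cap already covers apply_discount, while a safe approach would err in the other direction, keeping any test whose behaviour might differ on the modified program.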

An interesting approach to limiting test cases is based on whether we can confine testing to the "vicinity" of the change. (For example, if I put a new radio in my car, do I have to do a complete road test to make sure the change was successful?) A newer line of regression test theory tries to identify, through program-flow analysis or reverse engineering, where boundaries can be placed around modules and subsystems. The resulting dependency graphs can be used to determine which tests from the existing suite may exhibit changed behavior on the new version.
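As a rough illustration of the idea, the sketch below builds a small dependency graph, walks outward from the changed module to find everything in its "vicinity", and keeps only the tests that target those modules. The module names, tests, and graph are invented for the example and echo the car-radio analogy above.

from collections import deque

# Edges point from a module to the modules that depend on it.
dependents = {
    "radio":        {"dashboard"},
    "dashboard":    {"driver_ui"},
    "engine":       {"transmission"},
    "driver_ui":    set(),
    "transmission": set(),
}

tests_by_module = {
    "radio":        ["test_radio_presets"],
    "dashboard":    ["test_dashboard_display"],
    "driver_ui":    ["test_driver_controls"],
    "engine":       ["test_engine_start"],
    "transmission": ["test_gear_shift"],
}

def affected_modules(changed, dependents):
    """Breadth-first walk: everything that transitively depends on the change."""
    seen, queue = {changed}, deque([changed])
    while queue:
        for nxt in dependents.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

affected = affected_modules("radio", dependents)
selected = sorted(t for m in affected for t in tests_by_module.get(m, []))
print(selected)  # radio, dashboard and driver_ui tests only; engine tests are skipped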

Regression testing has been receiving more attention as corporations focus on fixing the 'Year 2000 Bug'. The goal of most Y2K projects is to correct the date-handling portions of a system without changing any other behavior. A new 'Y2K' version of the system is compared against a baseline original system. With the obvious exception of date formats, the behavior of the two versions should be identical. This means not only that they do the same things correctly, but also that they do the same things incorrectly: a non-Y2K bug in the original software should not have been fixed by the Y2K work.
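One common way to check this kind of "same behaviour except for dates" requirement is to run the same inputs through the baseline and the modified system and compare their outputs with the deliberately changed fields masked out. The sketch below illustrates the idea with hypothetical invoice output; the date format and masking rule are assumptions made for the example.

import re

# Baseline ("golden output") comparison: outputs must be identical once the
# fields that were deliberately changed (here, dates) are masked.
DATE_PATTERN = re.compile(r"\d{2}/\d{2}/\d{2,4}")

def normalise(report: str) -> str:
    """Mask date fields so the intentional format change is ignored."""
    return DATE_PATTERN.sub("<DATE>", report)

baseline_output = "Invoice 1042 issued 05/06/98 total 110.00\n"    # original system
modified_output = "Invoice 1042 issued 05/06/1998 total 110.00\n"  # 'Y2K' version

assert normalise(baseline_output) == normalise(modified_output), \
    "Modified system diverges from the baseline outside the date fields"
print("Outputs match once dates are masked")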

A frequently asked question about regression testing is 'The developer says this problem is fixed. Why do I need to re-test?', to which the answer is 'The same person probably told you it worked in the first place.'






