Exploratory testing is an interactive process of concurrent product exploration, test design, and test execution. The heart of exploratory testing can be stated simply: the outcome of this test influences the design of the next test.
The plainest definition of exploratory testing is test design and test execution at the same time. This is the opposite of scripted testing (predefined test procedures, whether manual or automated). Exploratory tests, unlike scripted tests, are not defined in advance and carried out precisely according to plan.

This may sound like a straightforward distinction, but in practice it's murky, because "defined" is a spectrum. Even an otherwise elaborately defined test procedure leaves many interesting details (such as how quickly to type on the keyboard, or what kinds of behavior to recognize as a failure) to the discretion of the tester. Likewise, even a free-form exploratory test session involves tacit constraints or mandates about what parts of the product to test, or what strategies to use. A good exploratory tester writes down test ideas and uses them in later test cycles. Such notes sometimes look a lot like test scripts, even if they aren't.

Exploratory testing is sometimes confused with "ad hoc" testing. Ad hoc testing normally refers to a process of improvised, impromptu bug searching; by definition, anyone can do it.
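To make the contrast concrete, here is a minimal sketch of a scripted automated test in Python. The function under test, `parse_price`, and its expected values are hypothetical illustrations, not from any real project: the point is that every input and expected result is fixed before execution, whereas an exploratory tester would let each result shape the next input.

```python
# A scripted test: inputs and expected outputs are decided in advance.
# parse_price is a hypothetical function that converts a price string
# like "$3.50" into an integer number of cents.

def parse_price(text):
    # Stand-in implementation so the example is runnable.
    dollars, _, cents = text.lstrip("$").partition(".")
    return int(dollars) * 100 + int(cents or 0)

def test_parse_price_scripted():
    # Each check is predefined; the tester (or CI runner) follows
    # the script exactly, with no room for on-the-fly redesign.
    assert parse_price("$3.50") == 350
    assert parse_price("$0.99") == 99
    assert parse_price("$10") == 1000

test_parse_price_scripted()
print("all scripted checks passed")
```

An exploratory session against the same function would instead start with one input, observe the result (say, an odd crash on "$3.5O" with a letter O), and design the next test from that observation.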
What kinds of specifics affect exploratory testing (ET)? Here are some of them:
1. the mission of the test project
2. the mission of this particular test session
3. the role of the tester
4. the tester (skills, talents, and preferences)
5. available tools and facilities
6. available time
7. available test data and materials
8. available help from other people
9. accountability requirements
10. what the tester's clients care about
11. the current testing strategy
12. the status of other testing efforts on the same product
13. the product itself:
    - its user interface
    - its behavior
    - its present state of execution
    - its defects
    - its testability
    - its purpose
14. what the tester knows about the product:
    - what just happened in the previous test
    - known problems with it
    - past problems with it
    - strengths and weaknesses
    - risk areas and magnitude of perceived risk
    - recent changes to it
    - direct observations of it
    - rumors about it
    - the nature of its users and user behavior
    - how it's supposed to work
    - how it's put together
    - how it's similar to or different from other products
15. what the tester would like to know about the product
For more, visit:
http://www.crestechsoftware.com/
Tuesday, June 10, 2008