Software Project Management
Software Project Management is concerned with planning, monitoring, and executing a project. In general, there are four successive processes that bring a system into being:
Requirement Gathering
Feasibility Study
Project Planning
Project Execution
a) Requirements Gathering
The requirements process is a full system life cycle set of activities that includes:
Understanding the customers' needs and expectations
Identifying and analyzing the requirements
Defining the requirements
Clarifying and restating the requirements
Prioritizing requirements
Partitioning requirements
Tracking requirements
Managing requirements
Testing and verifying requirements
Validating requirements
Requirements analysis and management need additional attention as key factors in the success of systems and software development projects.
Friday, December 19, 2008
Recommended Requirements Gathering Practices
The following is a list of recommended requirements gathering practices. They are based on the author's extensive review of industry literature combined with the practical experiences of requirements analysts who have supported dozens of projects.
Understand the project's vision and scope document.
Initiate a project glossary that provides definitions of words that are acceptable to and used by customers/users and the developers, and a list of acronyms to facilitate effective communication.
Evolve the real requirements via a "joint" customer/user and developer effort. Focus on product benefits (necessary requirements), not features. Address the minimum and highest priority requirements needed to meet real customer and user needs.
Document the rationale for each requirement (why it is needed).
Establish a mechanism to control changes to requirements and new requirements.
Prioritize the real requirements to determine those that should be met in the first release or product and those that can be addressed subsequently.
When the requirements are volatile (and perhaps even when they are not), consider an incremental development approach. This acknowledges that some of the requirements are "unknowable" until customers and users start using the system.
Use peer reviews and inspections of all requirements work products.
Use an industry-strength automated requirements tool.
Assign attributes to each requirement.
Provide traceability.
Maintain the history of each requirement.
Involve customers and users throughout the development effort.
Perform requirements validation and verification activities in the requirements gathering process to ensure that each requirement is testable.
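To make "attributes", "traceability", and "history" concrete, here is a minimal sketch of how a requirement record might be modelled. The Requirement class and its field names are illustrative inventions, not the schema of any particular requirements tool.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Requirement:
    """Illustrative requirement record with attributes, trace links, and history."""
    req_id: str                     # unique identifier, e.g. "REQ-042"
    text: str                       # the requirement statement
    rationale: str                  # why it is needed
    priority: int                   # 1 = highest
    status: str = "proposed"        # proposed / approved / implemented / verified
    trace_to: List[str] = field(default_factory=list)  # linked design/test artifacts
    history: List[str] = field(default_factory=list)   # change log entries

    def change(self, new_text, reason):
        """Record the old text before replacing it, preserving the history."""
        self.history.append("was: %r (changed because: %s)" % (self.text, reason))
        self.text = new_text

# Usage: trace a requirement forward to the test case that verifies it.
req = Requirement(
    req_id="REQ-042",
    text="The system shall respond to a search query within 2 seconds.",
    rationale="Usability study showed users abandon searches after 2 s.",
    priority=1,
)
req.trace_to.append("TC-108")  # link requirement to its verifying test case
req.change("The system shall respond to a search query within 1 second.",
           "Customer revised the performance target.")
print(req.history)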
Quality testing dashboard
The tool would gather information as test cases were created, as problem reports were entered, and as test cases were executed. The data would automatically be collected into a database, kept online and up to the second, and reporting would be available at all times.
Because the test management system fosters a structured test process, it can provide several reports and processes that would otherwise require extensive manual data collection, organization, analysis, and reporting.
Throughout the lifecycle of a project, the test management system can provide relevant status reporting to facilitate planning, test execution, results tracking, and release decisions.
1. During test development, reports are available to determine what work has been completed and what tasks remain open.
2. During execution, the test management system tracks which scripts have been executed and which have not, the result of each execution, the requirements coverage achieved, and links to defects reported from failed test cases, providing a complete view of release readiness.
Reports based on defect-tracking data alone show incomplete status; for example, knowing that there are ten open defects does not tell us much unless we also know how many test cases have been executed and how much requirements coverage those test cases achieve. We can use test management data to generate this missing information. Test case metrics complement defect metrics and give a better view of product quality.
Apart from this, other reports can be generated based on different attributes like type of test, modules, etc. Test management can provide objective, accurate, real-time information, which is just what is needed for deciding on the quality of a product. This is the most important benefit of having a structured testing process and tool. Based on test reports available, the product manager can make informed decisions about the quality of the application under development.
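As a concrete illustration of why test-case metrics complement defect counts, the sketch below computes execution progress and requirements coverage from a handful of hypothetical test-management records; the data layout is invented for the example and is not the schema of any real tool.

# Hypothetical test-management records: each test case knows its status
# and which requirements it covers.
test_cases = [
    {"id": "TC-1", "status": "passed", "covers": ["REQ-1", "REQ-2"]},
    {"id": "TC-2", "status": "failed", "covers": ["REQ-3"], "defects": ["BUG-7"]},
    {"id": "TC-3", "status": "not run", "covers": ["REQ-4"]},
]
all_requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4", "REQ-5"}

executed = [t for t in test_cases if t["status"] != "not run"]
passed = [t for t in executed if t["status"] == "passed"]
covered = {r for t in executed for r in t["covers"]}

print("Executed: %d/%d test cases" % (len(executed), len(test_cases)))
print("Passed:   %d/%d executed cases" % (len(passed), len(executed)))
print("Requirements coverage: %d/%d (%.0f%%)" % (
    len(covered), len(all_requirements),
    100.0 * len(covered) / len(all_requirements)))
# Ten open defects mean something very different at 20% coverage than at 95%.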
Preferred Requirements Gathering Techniques - 5: Risk Analysis
Risk Assessment for Projects
At least 50% of all projects (if not many more) are not successful, in the sense that they do not achieve their objectives, do not deliver the promised results, sacrifice the predefined quality, are not completed on schedule, or use far more resources than originally planned.
There is a multitude of reasons for projects to fail. Projects often come on top of the usual workload, and members of the project team belong to different departments, i.e. their first accountability is to their line manager, which often brings them into conflict with the project work. Team members have to work overtime if they want to complete their project tasks. In the end, project work is often sacrificed, and time budgets are often insufficient.
What is mostly neglected: the occurrence of problems in project implementation increases with the complexity and length of the project. Larger and more complex projects that run over more than a year fail for other reasons. Such projects often have well-established budgets and permanent staff who are released from other tasks and work full time on the project. However, they depend on a large number of external assumptions that influence their outcomes. It is impossible to predict clearly the future and the impact of various uncertain influence factors, and many project plans are too rigid to respond flexibly to changing needs.
Common to most projects is a lack of appropriate and transparent communication. Team members (and other stakeholders) often do not share a common understanding of the project's goals and strategies. It is important to unveil these misunderstandings and hidden agendas from the very beginning. The following tool, if applied in a project planning session, helps to uncover issues that otherwise might remain undiscussed.
Explanations:
Business Level: Does the project have a strategic importance for the organization?
Length: How long is the intended implementation time?
Complexity: Does the project cover various business areas / objectives?
Technology: Is the technology to be applied well-established or is it a technology which yet has to be developed?
Number of organizational units involved: cross functional / geographical areas, etc.
Costs: estimated costs of the project
Overall risk of failure: How would you personally rank the risk that the project cannot achieve the objectives with the intended resources?
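The scoring table the post refers to is not reproduced here, but a simple weighted-score sketch suggests how the factors above might be combined into an overall risk indicator. The 1-5 scale, the sample ratings, and the weights are all assumptions for illustration.

# Rate each factor from 1 (low risk) to 5 (high risk); ratings and weights
# below are illustrative, not prescribed values.
factors = {
    "business_level": 4,   # strategic importance to the organization
    "length":         5,   # intended implementation time
    "complexity":     3,   # number of business areas / objectives covered
    "technology":     2,   # well-established (low) vs. yet to be developed (high)
    "units_involved": 4,   # cross-functional / geographical spread
    "costs":          3,   # estimated project cost
}
weights = {
    "business_level": 0.15, "length": 0.20, "complexity": 0.20,
    "technology": 0.20, "units_involved": 0.15, "costs": 0.10,
}

overall = sum(factors[f] * weights[f] for f in factors)
print("Overall risk score: %.2f / 5" % overall)
if overall >= 3.5:
    print("High risk: plan explicit mitigation before committing resources.")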
Preferred Requirements Gathering Techniques - 4
Effort Estimation
Effort estimation consists of predicting how many hours of work and how many workers are needed to develop a project. The effort invested in a software project is probably one of the most important and most analysed variables in the project management process in recent years. Determining the value of this variable when initiating software projects allows us to plan forthcoming activities adequately. As far as estimation and prediction are concerned, there are still a number of unsolved problems and sources of error. To obtain good results it is essential to take previous projects into consideration. Estimating effort with a high degree of reliability is a problem that has not yet been solved, and the project manager has to deal with it from the very beginning.
Cost Estimation
It is the responsibility of the project manager to make accurate estimations of effort and cost. This is particularly true for projects subject to competitive bidding, where a bid too high compared with competitors would result in losing the contract, and a bid too low could result in a loss to the organisation. This does not mean that internal projects are unimportant: from a project leader's estimate, management often decides whether to proceed with the project. Industry needs accurate estimates of effort and size at a very early stage in a project. However, when software cost estimates are done early in the software development process, the estimate can be based on wrong or incomplete requirements. A software cost estimation process is the set of techniques and procedures that an organisation uses to arrive at an estimate. An important aspect of software projects is to know the cost; the major contributing factor is effort.
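The post does not name a specific estimation model, but the classic basic COCOMO model is a convenient illustration of how effort, duration, and average staffing can be derived from an early size estimate; the 10 KLOC input below is assumed for the example.

# Basic COCOMO (Boehm, 1981) for an "organic" (small, in-house) project:
#   effort (person-months) = 2.4 * KLOC ** 1.05
#   duration (months)      = 2.5 * effort ** 0.38
kloc = 10.0                       # assumed size estimate: 10,000 lines of code

effort = 2.4 * kloc ** 1.05       # ~26.9 person-months
duration = 2.5 * effort ** 0.38   # ~8.7 months
staffing = effort / duration      # ~3.1 people on average

print("Effort:   %.1f person-months" % effort)
print("Duration: %.1f months" % duration)
print("Staffing: %.1f people (average)" % staffing)
# The exponent greater than 1 captures the non-linearity that the human-bias
# point below warns about: doubling the size more than doubles the effort.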
Why is SCE difficult and error-prone?
Software cost estimation requires a significant amount of effort to perform correctly.
SCE is often done hurriedly, without an appreciation for the effort required.
You need experience at developing estimates, especially for large projects.
Human bias: an estimator is likely to consider how long a certain portion of the system would take and then merely extrapolate this estimate to the rest of the system, ignoring the non-linear aspects of software development.
The causes of poor and inaccurate estimation:
Imprecise and drifting requirements
New software projects are nearly always different from the last
Software practitioners don't collect enough information about past projects
Estimates are forced to match the resources available
Preferred Requirements Gathering Techniques - 3
Interface Analysis.
Missing or incorrect interfaces are often a major cause of cost overruns and product failures. Identifying external interfaces early clarifies product scope, aids risk assessment, reduces product development costs, and improves customer satisfaction. The steps of identifying, simplifying, controlling, documenting, communicating, and monitoring interfaces help to reduce the risk of problems related to interfaces.
Please see attached Requirement Analysis Template
b) The Feasibility Study
The Feasibility Study uses technical information and cost data to determine the economic potential and practicality (i.e. feasibility) of a project. It uses techniques that help evaluate a project and/or compare it with other projects. Factors such as interest rates, operating costs, and depreciation are generally considered. The following activities are carried out during the feasibility study:
An abstract definition of the problem
Formulation of different solution strategies
Examination of alternative solution strategies (in terms of benefits, resource requirements, costs, etc.)
Cost-benefit analysis to determine the best strategy
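To make the cost-benefit step concrete, here is a small sketch that compares two hypothetical solution strategies by net present value (NPV); the cash flows and the 10% discount rate are invented for the example.

def npv(rate, cashflows):
    """Net present value of cash flows, where cashflows[0] is the upfront cost."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Year-0 investment followed by yearly benefits (all figures hypothetical).
strategies = {
    "build in-house": [-200_000, 60_000, 90_000, 90_000, 90_000],
    "buy and adapt":  [-120_000, 50_000, 50_000, 50_000, 50_000],
}

rate = 0.10  # assumed discount rate
for name, flows in strategies.items():
    print("%s: NPV = %,.0f" % (name, npv(rate, flows)) if False else
          "{}: NPV = {:,.0f}".format(name, npv(rate, flows)))
# The strategy with the higher NPV is the economically preferable one,
# other factors (risk, time-to-market) being equal.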
Who uses it?
Technical Architect, Business Analyst, Configuration Manager, Development Manager, Project Manager, IT Manager, System Administrator, Test Manager, Documentation Manager, Technical Writers.
When is it used?
The Feasibility Study analyses potential solutions against a set of requirements, evaluates their ability to meet these objectives, describes a recommended solution, and offers a justification for this selection.
c) Project Planning
When a project is estimated to be feasible, project planning is done. Project planning consists of the following steps:
Effort, cost, resource, and project duration planning
Risk analysis and mitigation plan
Project scheduling
Staffing organization and staffing plan
Preferred Requirements Gathering Techniques - 2
Prototyping.
Prototyping is a technique for building a quick and rough version of a desired system or parts of that system. The prototype illustrates the capabilities of the system to users and designers. It serves as a communications mechanism to allow reviewers to understand interactions with the system. Prototyping sometimes gives an impression that developers are further along than is actually the case, giving users an overly optimistic impression of completion possibilities. Prototypes can be combined effectively with other approaches such as JAD and models.
Use Cases.
A use case is a picture of actions a system performs, depicting the actors. It should be accompanied by a textual description and not be used in isolation of other requirements gathering techniques. Use cases should always be supplemented with quality attributes and other information such as interface characteristics. Many developers believe that use cases and scenarios (descriptions of sequences of events) facilitate team communication. They provide a context for the requirements by expressing sequences of events and a common language for end users and the technical team.
Be cautioned that use cases alone do not provide enough information to enable development activities. Other requirements elicitation techniques should also be used in conjunction with use cases. Use operational concepts as a simple, cost-effective way to build a consensus among stakeholders and to address two large classes of requirements errors: omitted requirements and conflicting requirements. Operational concepts identify user interface issues early, provide opportunities for early validation, and form a foundation for testing scenarios in product verification.
Preferred Requirements Gathering Techniques - 1
The following is a set of recommended requirements elicitation techniques, which can be used in combination. Their advantage is that they are effective at eliciting the real requirements for planned development efforts.
Interviews
Interviews are used to gather information. However, the predisposition, experience, understanding, and bias of the person being interviewed influence the information obtained. The use of context-free questions by the interviewer helps avoid prejudicing the response. A context-free question is one that does not suggest a particular response. For example: Who is the client for this system? What is the real reason for wanting to solve this problem? What environment is this product likely to encounter? What kind of product precision is required?
Document Analysis
All effective requirements elicitation involves some level of analysis of documents such as business plans, market studies, contracts, requests for proposals, statements of work, existing guidelines, analyses of existing systems, and procedures. Improved requirements coverage results from identifying and consulting all likely sources of requirements.
Brainstorming
Brainstorming involves both idea generation and idea reduction. The goal of the former is to identify as many ideas as possible, while the latter ranks the ideas into those considered most useful by the group. Brainstorming is a powerful technique because the most creative or effective ideas often result from combining seemingly unrelated ideas. Also, this technique encourages original thinking and unusual ideas.
Requirements Workshops.
Requirements workshops are a powerful technique for eliciting requirements because they can be designed to encourage consensus concerning the requirements of a particular capability. They are best facilitated by an outside expert and are typically short (one or a few days). Other advantages are often achieved -- participant commitment to the work products and project success, teamwork, resolution of political issues, and reaching consensus on a host of topics. Benefits of requirements workshops include the following:
Workshop costs are often lower than those for multiple interviews.
They help to give structure to the requirements capture and analysis process.
They are dynamic, interactive, and cooperative.
They involve users and cut across organizational boundaries.
They help to identify and prioritize needs and resolve contentious issues.
When properly run, they help to manage users' expectations and attitudes toward change.
A special category of requirements workshop is a Joint Application Development (JAD) workshop. JAD is a method for developing requirements through which customers, user representatives, and developers work together with a facilitator to produce a requirements specification that both sides support.
Glossary - A (part 1)
A
Acceptance Testing: Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate that the software meets a set of agreed acceptance criteria.
Accessibility Testing: Verifying that a product is accessible to people with disabilities (e.g. visual, hearing, or cognitive impairments).
Ad Hoc Testing: A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well. See also Monkey Testing.
Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.
Application Binary Interface (ABI): A specification defining requirements for portability of applications in binary form across different system platforms and environments.
Application Programming Interface (API): A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.
Automated Software Quality (ASQ): The use of software tools, such as automated testing tools, to improve software quality.
Automated Testing: Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing. The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
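As a minimal illustration of the last definition (controlling test execution, setting preconditions, and comparing actual to expected outcomes without manual intervention), here is a tiny automated test in Python's standard unittest framework; the Cart class is a made-up subject under test.

import unittest

class Cart:
    """Made-up subject under test."""
    def __init__(self):
        self.items = []
    def add(self, price):
        self.items.append(price)
    def total(self):
        return sum(self.items)

class CartTest(unittest.TestCase):
    def setUp(self):
        # Test precondition: start every test from a known state.
        self.cart = Cart()

    def test_total_sums_item_prices(self):
        self.cart.add(10.0)
        self.cart.add(2.5)
        # Comparison of actual outcome to predicted outcome.
        self.assertEqual(self.cart.total(), 12.5)

if __name__ == "__main__":
    unittest.main()  # test control and reporting, no manual intervention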
Thursday, December 18, 2008
Tools Available - 5
eValid - Web test tool from Software Research, Inc. that uses a 'Test Enabled Web Browser' test engine to provide browser-based client-side quality checking, dynamic testing, content validation, page performance tuning, and webserver load and capacity analysis. Utilizes multiple validation methods.
Rational Functional Tester - IBM's (formerly Rational's) automated tool for testing of Java, .NET, and web-based applications. Enables data-driven testing, choice of scripting languages and editors. For Windows and Linux.
e-Test Suite - Integrated functional/regression test tool from Empirix for web applications and services and .NET and J2EE applications; includes site monitoring and load testing capabilities, and record/playback, scripting language, test process management capabilities. Includes full VBA script development environment and options such as javascript, C++, etc. DOM-based testing and validation; 'Data Bank Wizard' simplifies creation of data-driven tests. Evaluation version available.
QuickTest Pro - Functional/regression test tool from Mercury; includes support for testing Web, Java, ERP, etc.
Winrunner - Functional/regression test tool from Mercury; includes support for testing Web, Java, ERP, etc.
Compuware's QARun - QARun for functional/regression testing of web, Java, and other applications. Handles ActiveX, HTML, DHTML, XML, Java beans, and more.
SilkTest - Functional test tool from Segue for Web, Java or traditional client/server-based applications. Features include: test creation and customization, test planning and management, direct database access and validation, recovery system for unattended testing, and IDE for developing, editing, compiling, running, and debugging scripts, test plans, etc.
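Several of the tools above advertise data-driven testing, i.e. running the same test logic once per row of external data. Below is a tool-neutral sketch of the pattern using Python's csv module; the login function, the column names, and the data are all invented for the example.

import csv
import io

# Stand-in for an external data file (e.g. "login_cases.csv").
DATA = """username,password,expect_success
alice,correct-horse,true
alice,wrong-pass,false
,anything,false
"""

def login(username, password):
    """Hypothetical function under test."""
    return username == "alice" and password == "correct-horse"

# One pass of the same test logic per data row.
for row in csv.DictReader(io.StringIO(DATA)):
    expected = (row["expect_success"] == "true")
    actual = login(row["username"], row["password"])
    result = "PASS" if actual == expected else "FAIL"
    print("%s: login(%r, %r)" % (result, row["username"], row["password"]))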
Tools Available - 4
HTTP-WebTest - A Perl module which runs tests on remote URLs or local Web files containing Perl/JSP/HTML/JavaScript/etc., and generates a detailed test report. This module can be used "as-is" or its functionality can be extended using plugins. Plugins can define test types and provide additional report capabilities. This module comes with a set of default plugins, but can be easily extended with third-party plugins. Open-source project maintained by Ilya Martynov.
HttpUnit - Open source Java program for accessing web sites without a browser, from SourceForge.net/Open Source Development Network, designed and implemented by Russell Gold. Ideally suited for automated unit testing of web sites when combined with a Java unit test framework such as JUnit. Emulates the relevant portions of browser behavior, including form submission, basic http authentication, cookies and automatic page redirection, and allows Java test code to examine returned pages as text, an XML DOM, or containers of forms, tables, and links. Includes ServletUnit to test servlets without a servlet container.
iOpus Internet Macros - Macro recorder utility from iOpus Inc. automates repetitious aspects of web site testing. Records any combination of browsing, form filling, clicking, script testing and information gathering; assists user during the recording with visual feedback. Power users can manually edit a recorded macro. A command line interface allows for easy integration with other test software. Works by remote controlling the browser, thus automatically supports advanced features such as SSL, HTTP-Redirects and cookies. Can handle data input from text files, databases, or XML. Can extract web data and save as CSV file or process the data via a script. For Windows and MSIE.
MaxQ - Free open-source web functional testing tool from Tigris.org, written in Java. Works as a proxy server; includes an HTTP proxy recorder to automate test script generation, and a mechanism for playing tests back from the GUI and command line. Jython is used as the scripting language, and JUnit is used as the testing library.
TestWeb - Test tool from Original Software Group Ltd. utilizes a new approach to recording/playback of web browser scripts. It analyses the underlying intentions of the script and executes it by direct communication with web page elements. IntelliScripting logic removes the reliance on specific browser window sizes, component location and mouse movements for accurate replay, for easier script maintenance; supports hyperlinks targeted at new instances of browser. Playback can run in background while other tasks are performed on the same machine.
Compuware TestPartner - Automated software testing tool from Compuware designed specifically to validate Windows, Java, and web-based applications. The 'TestPartner Visual Navigator' can create visual-based tests, or MS VBA can be used for customized scripting.
WebKing - Web site functional, load, and static analysis test suite from ParaSoft. Maps and tests all possible paths through a dynamic site; can enforce over 200 HTML, CSS, JavaScript, 508 compliance, WML and XHTML coding standards or customized standards. Allows creation of rules for automatic monitoring of dynamic page content. Can run load tests based on the tool's analysis of web server log files. For Windows, Linux, Solaris.
Tools Available - 3
TestAgent - Capture/playback tool for user acceptance testing from Strenuus, LLC. Key features besides capture/playback include automatically detecting and capturing standard and custom content errors. Reports information needed to troubleshoot problems. Enables 'Persistent Acceptance Testing' that activates tests each time a web application is used.
MITS.GUI - Unique test automation tool from Omsphere LLC; has an intelligent state machine engine that makes real-time decisions for navigating through the GUI portion of an application. It can test thousands of test scenarios without use of any scripts. Allows creation of completely new test scenarios without ever having performed that test before, all without changing tool, testware architecture (object names, screen names, etc), or logic associated with the engine. Testers enter test data into a spreadsheet used to populate objects that appear for the particular test scenario defined.
Badboy - Tool from Bradley Software to aid in building and testing dynamic web based applications. Combines sophisticated capture/replay ability with performance testing and regression features. Free for most uses; source code available.
SAMIE - Free tool designed for QA engineers - 'Simple Automated Module For Internet Explorer'. A Perl module that allows a user to automate use of IE via Perl scripts. Written in ActivePerl, allowing inheritance of all Perl functionality including regular expressions, Perl DBI database access, and many CPAN library functions. Uses IE's built-in COM object, which provides a reference to the DOM for each browser window or frame. Easy development and maintenance - no need to keep track of GUI maps for each window. For Windows.
PAMIE - Free open-source 'Python Automated Module For Internet Explorer'. Allows control of an instance of MSIE and access to its methods through OLE automation. Utilizes Collections, Methods, Events and Properties exposed by the DHTML Object Model.
PureTest - Free tool from Minq Software AB, includes an HTTP Recorder and Web Crawler. Create scenarios using the point and click interface. Includes a scenario debugger including single step, break points and response introspection. Supports HTTPS/SSL, dynamic Web applications, data driven scenarios, and parsing of response codes or parsing page content for expected or unexpected strings. Includes a Task API for building custom test tasks. The Web Crawler is useful for verifying consistency of a static web structure, reporting various metrics, broken links and the structure of the crawled web. Multi-platform - written in Java.
Solex - Web application testing tool built as a plug-in for the Eclipse IDE (an open, extensible IDE). Records HTTP messages by acting as a Web proxy; recorded sessions can be saved as XML and reopened later. HTTP requests and responses are fully displayed in order to inspect and customize their content. Allows the attachment of extraction or replacement rules to any HTTP message content, and assertions to responses in order to validate a scenario during its playback.
QA Wizard - Automated functional web test tool from Seapine Software. Advanced object binding reduces script changes when Web-based apps change. Next-generation scripting language eliminates problems created by syntax or other language errors. Includes capability for automated scripting, allowing creation of more scripts in less time. Supports an unlimited set of ODBC-compatible data sources as well as MS Excel, tab/comma-delimited file formats, and more. Free demo and test script available. For Windows platforms.
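Both SAMIE and PAMIE rely on the same underlying mechanism: scripting the InternetExplorer.Application COM object. As a rough Python sketch of that approach (this is not PAMIE's own API; the URL and element IDs are placeholders, and the pywin32 package is assumed):

import time
import win32com.client  # from the pywin32 package

ie = win32com.client.Dispatch("InternetExplorer.Application")
ie.Visible = True
ie.Navigate("http://www.example.com/login")  # placeholder URL
while ie.Busy or ie.ReadyState != 4:  # 4 = READYSTATE_COMPLETE
    time.sleep(0.5)
doc = ie.Document  # the DHTML DOM, exposed over COM
doc.getElementById("username").value = "tester"  # placeholder element ID
doc.getElementById("loginForm").submit()         # placeholder form ID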
Tools Available - 2
WebInject - Open source tool in Perl for automated testing of web applications and services. Can be used to unit test any individual component with an HTTP interface (JSP, ASP, CGI, PHP, servlets, HTML forms, etc.) or to create a suite of HTTP-level functional or regression tests. (A minimal illustration of this style of test appears after this list.)
Site Test Center - Functional and performance test tool from Alliance Software Engineering. Has an XML-based scripting capability to enable modifying captured scripts or creating new scripts. Utilizes a distributed testing model and consists of three parts: STC Administrator, STC Master and STC Master Service.
jWebUnit - Open source Java framework that facilitates creation of acceptance tests for web applications. Provides a high-level API for navigating a web application combined with a set of assertions to verify the application's correctness including navigation via links, form entry and submission, validation of table contents, and other typical business web application features. Utilizes HttpUnit behind the scenes. The simple navigation methods and ready-to-use assertions allow for more rapid test creation than using only JUnit and HttpUnit.
SimpleTest - Open source unit testing framework which aims to be a complete PHP developer test solution. Includes all of the typical functions that would be expected from JUnit and the PHPUnit ports, but also adds mock objects; has some JWebUnit functionality as well. This includes web page navigation, cookie testing and form submission.
WinTask - Macro recorder from TaskWare; automates repetitive tasks for Web site testing (and standard Windows applications) using its HTML object recognition. Includes the capability to expand the scope of macros by editing and adding loops, branching statements, etc. (300+ commands), and to ensure the robustness of scripts with synchronization commands. Includes a WinTask Scheduler.
TestCaseMaker/Runner - Test case document driven functional test tool for web applications from Agile Web Development. Maker creates test case documents, and Runner executes the test case document; test case documents are always synchronized with the application. Free including source code.
Canoo WebTest - Free Java Open Source tool for automatic functional testing of web applications. XML-based test script code is editable with user's preferred XML editor; until recording capabilities are added, scripts have to be developed manually. Can group tests into a testsuite that again can be part of a bigger testsuite. Test results are reported in either plain text or XML format for later presentation via XSLT. Standard reporting XSLT stylesheets included, and can be adapted to any reporting style or requirements.
TestSmith - Functional/regression test tool from Quality Forge. Includes an intelligent, HTML/DOM-aware, object-mode recording engine and a data-driven, adaptable, multi-threaded playback engine. Handles applets, Flash, ActiveX controls, animated bitmaps, etc. Controls are recorded as individual objects independent of screen positions or resolution; the playback window/size can differ from that at capture. Special validation points, such as bitmap or text matching, can be inserted during a recording, but all recorded items are validated and logged 'on the fly'. Fuzzy matching capabilities. Editable scripts can be recorded in the SmithScript language or in Java, C++ or C++/MFC. 90-day evaluation copy available.
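To make the HTTP-level style of testing that WebInject and jWebUnit implement concrete, here is a minimal Python sketch of the idea (this is not WebInject's own XML syntax; the URL and expected text are placeholders):

import urllib.request

def http_functional_test(url, expected_text):
    # Request the page, then verify both the status code and the
    # presence of expected content: the core of an HTTP-level test.
    with urllib.request.urlopen(url) as response:
        body = response.read().decode("utf-8", errors="replace")
    assert response.status == 200, "unexpected status: %d" % response.status
    assert expected_text in body, "expected text not found in response"

http_functional_test("http://www.example.com/", "Welcome")  # placeholders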
Tools Available - 1
IeUnit - Simple open-source framework to test the logical behavior of web pages, released under IBM's Common Public License. It helps users create, organize and execute functional unit tests. Includes a test runner with a GUI. Implemented in JavaScript for the Windows XP platform with Internet Explorer.
QEngine Web Test Studio - Web functional test tool from AdventNet. Scripting uses Jython; records page element controls symbolically rather than with raw screen coordinates. Secure recording on password fields; data-driven test wizard to fetch script data from an external source; provision to add GUI, database and file checkpoints and to verify database tables, files, page titles and HTML element properties. Supports keyword-driven testing, built-in exception handling and a reporting facility. Works with a variety of browsers and OS's. Free and professional versions available.
AppPerfect DevSuite - Suite of testing, tuning, and monitoring products from AppPerfect Corp. that includes a web functional testing module. Records browser interaction by element instead of screen coordinates. Supports handling dynamic content created by JavaScript; supports ASP, JSP, HTML, cookies, SSL. For Windows and MSIE; integrates with a variety of IDEs.
JStudio SiteWalker - Test tool from Jarsch Software Studio allows capture/replay recording; fail definitions can be specified for each step of the automated workflow via JavaScript. JavaScript's Document Object Model enables full access to all document elements. Test data from any database or Excel spreadsheet can be mapped to enter values automatically into HTML form controls. HTML-based test result reports can be generated. Shareware for Windows/MSIE.
Test Complete Enterprise - Automated test tool from AutomatedQA Corp.; includes web functional testing capabilities. Works with Internet Explorer.
QEngine - Test tool from AdventNet; enables functional testing of Web sites and Web-based applications. Record and playback capability; automatically records Web browser events and translates them into editable Python scripts. Includes a Script Editor and an Application Map Editor to view and edit map object properties. Supports multiple OS's and browsers.
actiWate - Java-based Web application testing environment from Actimind Inc. Advanced framework for writing test scripts in Java (similar to open-source frameworks like HttpUnit, HtmlUnit etc. but with extended API), and Test Writing Assistant - Web browser plug-in module to assist the test writing process. Freeware.
KUMO Editor - Toolset from Softmorning LTD for creation and editing of web macros and automated web tests. Includes syntax-coloring editor with intellisense, autocomplete, run-time debugging features. Macro recorder transforms any click to a C# directive. Page objects navigator allows browsing of hierarchy of web objects in a page. Enables creation of scenarios from spreadsheets; and loop, retry on error, robust handling of page modifications. Can export created .DLL and .EXE files to enable running web macros on demand and integration into other software frameworks. Multilingual for Asian, eastern and western European languages.
The Goals of Bug Writing
As with all other forms of writing, it's important to remember who your audience is. Usually, the bug will be written for developers and for other QA engineers. Know also that other departments may need to understand the bug: marketing, tech support, etc. You can assume that everyone who may read the bug will have some understanding of the product against which the bug is logged and of the technology used. They may not, however, know much about your specific test area.
1. Eliminate basic questions that a Development Engineer might have by including essential information.
2. Understand your audience: make steps understandable for other departments (tech support, marketing) or other testers not in your area.
3. Make searching easier for yourself and others.
4. Write in a way that demonstrates you've done the necessary isolation.
Title
Keep the title short and sweet. It should be as explicit as possible in as few words as possible. Think again about what the title will be used for and who might use it. A QA engineer might be looking to match their bug against what's already logged and needs a quick way to scan through the titles. Distill each bug to its crucial elements and put that in the title.
1. Try to write in a cause-and-effect manner ("When A is done, B happens," or "B happens when A is done.")
2. Avoid ambiguous wording, such as "feature is broken/incorrect behavior/does not work," etc. Instead, say precisely how it's broken, incorrect, or not working.
3. Use keywords that will make searching for the bug easier for yourself or someone else looking for duplicate bugs. Avoid using jargon, slang, or vocabulary that is too specific to your area.
4. When an assert appears in your bug, include the assert or a portion of the assert in the title (e.g., "Assert, 'index < fLength' when pasting text into very small text frame").
Description
This is where all of the information, the body of the bug, resides: Steps to Reproduce, Actual Results, Expected Results, and any other helpful or vital information regarding the bug.
Steps to Reproduce
Bug reports usually suffer from two deficiencies:
1. Too many steps, often poorly organized. Having too many steps in a bug report makes it difficult to read and understand, especially given the confined viewing area in Vantive.
2. Too little information in a bug often leads to unnecessary extra effort and time. Often a valid bug will be sent back by the engineer as Cannot Reproduce because its poorly constructed steps could not be followed. This can also indicate that you haven't completed enough isolation steps.
Essential information to include in Steps to Reproduce (a sample report skeleton follows this list):
1. Setup variables: Indicate which printers, fonts, or drivers are necessary to reproduce this bug. Indicate the working OS, if it's essential information for reproducing this bug.
2. Environmental variables. For example, indicate if you are working in application or document mode.
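Putting the pieces together, a hypothetical bug report skeleton might look like the following; every value shown is a placeholder, not a real defect:

Title: Assert, 'index < fLength' when pasting text into very small text frame
Setup: Product build 1234, Windows XP, PostScript printer driver installed
Environment: Document mode
Steps to Reproduce:
1. Create a text frame smaller than one character in height.
2. Copy a paragraph of text to the clipboard.
3. Paste into the text frame.
Actual Result: Assert dialog 'index < fLength' appears; the application must be closed.
Expected Result: Text is pasted and reflows, or the paste is rejected gracefully.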
Test Management System
Test design and procedure development
At the test design stage, testers will create a description of each test, document its scope and objective, and include any information that helps illustrate the purpose of a specific test, such as requirements documents, functional specifications, etc. During the test development phase, testers will document detailed test execution steps and define the expected results for each step. The test management system helps in defining and documenting test cases by providing standard Web-based, pre-formatted template forms, with fields based on the product and component information, for creating and editing test cases. These get posted to the centralized database, which enables standardization and consistency across the testing team. The system also helps in linking to the requirements specification to ensure traceability and test coverage. A test case may have been created because of a known defect, in which case an association is created with that defect. We can also define the sequence in which test cases should be executed; this may be based on functional dependencies or on other factors such as risk and priority.
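As a rough illustration (not any particular product's schema; all field names here are assumptions), the record such a system keeps for each test case might look like this:

from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    case_id: str
    title: str
    objective: str
    steps: List[str]                  # detailed execution steps
    expected_results: List[str]       # expected result for each step
    requirement_ids: List[str] = field(default_factory=list)    # traceability links
    linked_defect_ids: List[str] = field(default_factory=list)  # known-defect associations
    priority: int = 3                 # execution-order driver: dependencies, risk, etc.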
Organization
To verify application functionality and usability, tests have to realistically emulate end-user behavior. To achieve this, test execution should follow predefined logic, such as running certain tests after other tests have passed, failed, or been completed. For example, a user logs into the system, enters a new order and then exits the system. To emulate this simple business process, it makes sense to run the tests following the exact same sequence: log in, insert order, log out. The execution logic rules should be set prior to executing the actual tests.
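A minimal sketch of this ordering idea, using Python's standard unittest library (the test bodies are placeholders for real application steps):

import unittest

class OrderProcess(unittest.TestCase):
    def test_login(self):
        pass  # placeholder: log in and verify the session was created

    def test_insert_order(self):
        pass  # placeholder: enter a new order and verify it was saved

    def test_logout(self):
        pass  # placeholder: exit the system and verify the session ended

if __name__ == "__main__":
    # Build the suite explicitly so execution mirrors the business
    # sequence described above: log in, insert order, log out.
    suite = unittest.TestSuite()
    for name in ("test_login", "test_insert_order", "test_logout"):
        suite.addTest(OrderProcess(name))
    unittest.TextTestRunner(verbosity=2).run(suite)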
Review
Once the test cases have been created, we can get them reviewed by the required team members and customers. The Web interface makes it easy to communicate the test cases to the team. This review verifies the test cases developed by the test team and allows them to be improved further, if required, before actual testing begins.
Execution
The test cases can be accessed from any computer over the intranet/Internet, depending on how the test management tool is deployed. The test management system helps in locating a test case and provides a Web interface to process it. As the test is processed, the tester can immediately log the actual results along with pass/fail status and additional comments. The Web-based process supports parallel execution of test cases by many team members, which is not possible with a single flat file that gets "routed" around. If the test has failed before, it has an associated defect number which the tester can look at to see whether the previous defect report should be reopened or a new one created. This helps to ensure that nothing falls through the cracks.
Maintenance
The test management system will maintain an accurate history of each run, including execution configuration, date and time of run, who ran the test, and any defects that were uncovered during the run.
Defect management
When a test case fails, the tester can enter the ID of the defect that caused the case to fail. The defect is, of course, inserted into the defect tracking system. The defect can be linked with the test cases. This will provide information to reproduce and analyze the defects.
Bug Logging / Writing / Reporting
The first aim of a bug report is to let the programmer see the failure with their own eyes. If you can't be with them to make it fail in front of them, give them detailed instructions so that they can make it fail for themselves.
If the first aim doesn't succeed, and the programmer can't see it failing themselves, the second aim of a bug report is to describe what went wrong. Describe everything in detail. State what you saw, and also state what you expected to see. Write down the error messages, especially if they have numbers in them.
When your computer does something unexpected, freeze. Do nothing until you're calm, and don't do anything that you think might be dangerous. By all means try to diagnose the fault yourself if you think you can, but if you do, you should still report the symptoms as well.
Be ready to provide extra information if the programmer needs it. If they didn't need it, they wouldn't be asking for it. They aren't being deliberately awkward. Have version numbers at your fingertips, because they will probably be needed. Write clearly. Say what you mean, and make sure it can't be misinterpreted. Above all, be precise. Programmers like precision.
Useful bug reports are ones that get bugs fixed. A useful bug report normally has two qualities:
1. Reproducible. If an engineer can't see it or conclusively prove that it exists, the engineer will probably stamp it WORKSFORME or INVALID, and move on to the next bug. Every relevant detail you can provide helps.
2. Specific. The quicker the engineer can isolate the issue to a specific problem, the more likely it'll be expediently fixed. If you're crashing on a site, please take the time to isolate what on the page is triggering the crash, and include it as an HTML snippet in the bug report if possible. (Specific bugs have the added bonus of remaining relevant when an engineer actually gets to them; in a rapidly changing web, a bug report of "foo.com crashes my browser" becomes meaningless after the site experiences a half-dozen redesigns and hundreds of content changes.)
Automated Testing - Manual Vs Automation
If you're only going to run the test one or two times, or the test is really expensive to automate, it is most likely a manual test. But then again, what good is saying "use common sense" when you need to come up with a deterministic set of guidelines on how and when to automate?
Pros of Automation
If you have to run a set of tests repeatedly, automation is a huge win for you
It gives you the ability to run automation against code that frequently changes to catch regressions in a timely manner
It gives you the ability to run automation in mainstream scenarios to catch regressions in a timely manner
Aids in testing a large test matrix (different languages on different OS platforms).
Automated tests can be run at the same time on different machines, whereas the manual tests would have to be run sequentially.
Cons of Automation
It costs more to automate.
Writing the test cases and writing or configuring the automation framework you're using costs more initially than running the test manually.
You can't automate visual verification: for example, if you can't determine the font color via code or the automation tool, it is a manual test.
Pros of Manual
If the test case only runs twice per coding milestone, it most likely should be a manual test; this costs less than automating it.
It allows the tester to perform more ad hoc (random) testing. More bugs are found via ad hoc testing than via automation, and the more time a tester spends playing with the feature, the greater the odds of finding real user bugs.
Cons of Manual
1. Running tests manually can be very time consuming
2. Each time there is a new build, the tester must rerun all required tests - which after a while would become very mundane and tiresome.
Other deciding factors
1. What you automate depends on the tools you use. If the tools have any limitations, those tests are manual.
2. Is the return on investment worth automating?
3. Is what you get out of automation worth the cost of setting up and supporting the test cases, the automation framework, and the system that runs the test cases?
Criteria for automating
There are two sets of questions to determine whether automation is right for your test case; a small sketch of the resulting decision rule follows the rubric.
Is this test scenario automatable?
1. Yes, and it will cost a little
2. Yes, but it will cost a lot
3. No, it is not possible to automate
How important is this test scenario?
1. I must absolutely test this scenario whenever possible
2. I need to test this scenario regularly
3. I only need to test this scenario once in a while
If you answered #1 to both questions – definitely automate that test
If you answered #1 or #2 to both questions – you should automate that test
If you answered #2 to both questions – you need to consider if it is really worth the investment to automate
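Encoding the rubric as code makes the rule explicit. A small sketch (the recommendation strings paraphrase the guidance above; the cases where importance is answered #3 are not covered by the text, so that fallback is an assumption):

def automation_recommendation(automatable, importance):
    """Both arguments are the 1-3 answers to the two questions above."""
    if automatable == 3:
        return "cannot automate; see the options below"
    if automatable == 1 and importance == 1:
        return "definitely automate this test"
    if automatable == 2 and importance == 2:
        return "consider whether the investment is really worth it"
    if importance <= 2:
        return "you should automate this test"
    return "probably leave it as an occasional manual test"  # assumption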
What happens if you can't automate?
Let's say that you have a test that you absolutely need to run whenever possible, but it isn't possible to automate. Your options are:
Reevaluate – do I really need to run this test this often?
What's the cost of doing this test manually?
Look for new testing tools
Consider test hooks
Automated Testing
Test automation is used to replace or supplement traditional manual software testing with a suite of test programs. Benefits to QA engineers include increased software quality, repeatable test procedures, and reduced testing costs. Essentially, automated software testing uses a computer system instead of a human to test a software application; most other forms of software testing require human interaction with the software product under development.
Automated Testing is done to:
1. REDUCED TESTING TIME. A typical automated test suite will run in less than 24 hours. For a sophisticated product, manual testing may require dozens of staff-months to perform the same testing.
2. CONSISTENT TEST PROCEDURES. With a complex testing process, manual testing often yields inconsistent coverage and results depending on the staff and schedule employed. An automated test suite ensures the same scope and process is used each time testing is performed.
3. REDUCED QA COSTS. Automated testing has an upfront development cost, but over the lifetime of a product it offers substantial net savings. Developing an average automated test suite costs 3-5 times as much as a complete manual test cycle; over multiple product releases with multiple cycles per release, this cost is quickly recouped (a rough worked example follows this list).
4. IMPROVED TESTING PRODUCTIVITY. With its much shorter execution time, an automated test suite can be run multiple times over the course of a product development cycle.
5. IMPROVED PRODUCT QUALITY. Automated testing detects functional and performance issues more efficiently.
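A back-of-the-envelope illustration of point 3 (the staff-day figures are assumed for illustration, not taken from any study): suppose a complete manual test cycle costs 10 staff-days and the automated suite costs 4 times that, 40 staff-days, to develop, plus 1 staff-day per run for maintenance and results review. After N cycles the manual total is 10N staff-days and the automated total is 40 + N. Break-even occurs at 10N = 40 + N, i.e. N of roughly 4.5, so automation is cheaper from the fifth cycle onward; over three releases with four cycles each (N = 12), automation costs 52 staff-days against 120 for manual testing.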
Points to consider:
1. It's important to define the purpose of taking on a test automation effort. There are several categories of testing tools, each with its own purpose. Identifying what you want to automate, and where in the testing life cycle, is the first step in developing a test automation strategy. Just wishing that everything should be tested faster is not a practical strategy; you need to be specific.
2. Developing a test automation strategy is very important in mapping out what's to be automated, how it's going to be done, how the scripts will be maintained, and what the expected costs and benefits will be.
3. Many of the testing 'tools' provided by vendors are very sophisticated and use coding 'languages'. Treat the entire process of automating testing as you would any other software development effort: define what should be automated (the requirements phase), design the test automation, write the scripts, and test the scripts. The scripts need to be maintained over the life of the product just as any program would require maintenance.
4. The effort of test automation is an investment; more time and resources are needed up front. The benefit comes from running these automated tests in every subsequent release. Therefore, ensuring that the scripts can be easily maintained becomes very important.
Tuesday, December 16, 2008
White Box Testing Techniques
1. Basis Path Testing
a. Flow Graph Notation
b. Cyclomatic Complexity
c. Deriving Test Cases
d. Graph Matrices
2. Control Structure testing
a. Conditions Testing
b. Data Flow Testing
c. Loop Testing
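As a small worked example of the ideas in this outline (the function is illustrative, not from any particular codebase):

def count_positive(values):
    count = 0
    for v in values:      # decision 1: loop condition
        if v > 0:         # decision 2: branch condition
            count += 1
    return count

# Cyclomatic complexity: with D = 2 decision points, V(G) = D + 1 = 3.
# Equivalently, for a flow graph with E edges and N nodes, V(G) = E - N + 2.
# Basis path testing therefore derives 3 independent paths, exercised e.g. by
# the inputs [] (loop skipped), [1] (branch taken), and [-1] (branch not taken).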
Usability Testing
Usability testing is a special form of testing that looks for bugs not in the functionality of the program, but in the layout and utility of the user interface. This step is often performed on a prototype before the actual system code is written, so it is easy to change if needed. For usability testing, you need to plan:
01. How you will choose the users for the test (what is a representative sample of your real user population?)
02. A set of tasks for the users to perform representing paths through your system showing key functionality
03. A method for getting feedback from the users - surveys, interviews, data collection in the system itself
04. How you will analyze the data you collected in order to make improvements to the user interface
Types of Black Box Testing
Acceptance Testing / User Acceptance Testing - UAT
Formal tests (often performed by a customer) to determine whether or not a system has satisfied predetermined acceptance criteria. These tests are often used to enable the customer (either internal or external) to determine whether or not to accept a system.
The test procedures that lead to formal 'acceptance' of new or changed systems. User Acceptance Testing is a critical phase of any 'systems' project and requires significant participation by the 'End Users'. To be of real use, an Acceptance Test Plan should be developed in order to plan precisely, and in detail, the means by which 'Acceptance' will be achieved. The final part of the UAT can also include a parallel run to prove the system against the current system.
The User Acceptance Test Plan will vary from system to system but, in general, the testing should be planned in order to provide a realistic and adequate exposure of the system to all reasonably expected events. The testing can be based upon the User Requirements Specification to which the system should conform.
As in any system though, problems will arise and it is important to have determined what will be the expected and required responses from the various parties concerned; including Users; Project Team; Vendors and possibly Consultants / Contractors.
In order to agree what such responses should be, the End Users and the Project Team need to develop and agree a range of 'Severity Levels'. These levels will range from (say) 1 to 6 and will represent the relative severity, in terms of business/commercial impact, of a problem with the system found during testing. Here is an example which has been used successfully, where '1' is the most severe and '6' has the least impact:
1. 'Show Stopper', i.e. it is impossible to continue with the testing because of the severity of this error/bug.
2. Critical Problem; testing can continue but we cannot go into production (live) with this problem.
3. Major Problem; testing can continue, but in live operation this feature will cause severe disruption to business processes.
4. Medium Problem; testing can continue and the system is likely to go live with only minimal departure from agreed business processes.
5. Minor Problem; both testing and live operations may progress. This problem should be corrected, but little or no change to business processes is envisaged.
6. 'Cosmetic' Problem, e.g. colours, fonts, pitch size. However, if such features are key to the business requirements they will warrant a higher severity level.
The users of the system, in consultation with the executive sponsor of the project, must then agree upon the responsibilities and required actions for each category of problem. For example, you may demand that any problems in severity level 1, receive priority response and that all testing will cease until such level 1 problems are resolved.
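One way to record such an agreement is as a simple severity-to-response table. A sketch (the responses shown paraphrase the example levels above and would be replaced by whatever the parties actually agree):

SEVERITY_RESPONSE = {
    1: "Show stopper: all testing ceases until resolved; priority response",
    2: "Critical: testing continues, but the system cannot go live",
    3: "Major: must be fixed before live operation",
    4: "Medium: may go live with minimal departure from agreed processes",
    5: "Minor: testing and live operations proceed; correct when practical",
    6: "Cosmetic: correct as time allows, unless key to business requirements",
}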
Caution: even where the severity levels and the responses to each have been agreed by all parties, the allocation of a problem into its appropriate severity level can be subjective and open to question. To avoid the risk of lengthy and protracted exchanges over the categorisation of problems, we strongly advise that a range of examples is agreed in advance to ensure that there are no fundamental areas of disagreement; or, if there are, these will be known in advance and your organisation is forewarned.
Finally, it is crucial to agree the Criteria for Acceptance. Because no system is entirely fault free, the maximum number of acceptable 'outstandings' in any particular category must be agreed between the End User and the vendor. Again, prior consideration of this is advisable.
N.B. In some cases, users may agree to accept ('sign off') the system subject to a range of conditions. These conditions need to be analysed as they may, perhaps unintentionally, seek additional functionality which could be classified as scope creep. In any event, any and all fixes from the software developers, must be subjected to rigorous System Testing and, where appropriate Regression Testing.
Software Testing Training
Our Software Testing Partner
Software testing institute
corporate training software testing
For More Visit Site
http://www.qacampus.com
For discussion FORUM
http://www.qacampus.com/forum
Testing Documents
a. User Requirements Specification - URS
The User Requirements Specification is a document, produced by or on behalf of your organisation, which records the purposes for which a system is required - its functional requirements - usually in order of priority or gradation.
Whilst the URS will not usually probe the technical specification, it will nevertheless outline expectations and, where essential, may provide further detail, e.g. the user interface (say, Microsoft Windows®) and the expected hardware platform.
The URS is an essential document which states precisely what the User (or customer) expects from the system. The User Requirements Specification may also incorporate the functional requirements of the system, or these may appear in a separate document labelled the Functional Requirements Specification - the FRS.
b. Test Plan
A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:
Title
Identification of software including version/release numbers
Revision history of document including authors, dates, approvals
Table of Contents
Purpose of document, intended audience
Objective of testing effort
Software product overview
Relevant related document list, such as requirements, design documents, other test plans, etc.
Relevant standards or legal requirements
Traceability requirements
Relevant naming conventions and identifier conventions
Overall software project organization and personnel/contact-info/responsibilities
Test organization and personnel/contact-info/responsibilities
Assumptions and dependencies
Project risk analysis
Testing priorities and focus
Scope and limitations of testing
Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
Outline of data input equivalence classes, boundary value analysis, error classes
Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems
Test environment validity analysis - differences between the test and production systems and their impact on test validity.
Test environment setup and configuration issues
Software migration processes
Software CM processes
Test data setup requirements
Database setup requirements
Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
Test automation - justification and overview
Test tools to be used, including versions, patches, etc.
Test script/test code maintenance processes and version control
c. Test Case
01. A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
02. Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.
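As an illustration of these particulars, a test case can be captured as a simple structured record. The following Python sketch is one possible shape; the field names and example values are assumptions, not a prescribed format.

    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        # Fields mirror the particulars listed above; the names are
        # illustrative, not a mandated schema.
        identifier: str
        name: str
        objective: str
        setup: str
        input_data: dict
        steps: list = field(default_factory=list)
        expected_result: str = ""

    tc = TestCase(
        identifier="TC-LOGIN-001",
        name="Valid login",
        objective="Verify a registered user can log in",
        setup="User 'alice' exists with password 'secret'",
        input_data={"username": "alice", "password": "secret"},
        steps=["Open the login page", "Enter the credentials", "Click 'Log in'"],
        expected_result="The user is taken to the home page",
    )
    print(tc.identifier, "-", tc.name)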
d. Test Script
A test script is essentially a series of test cases linked together to walk the critical pathways through an application under test. These scripts are broken down into user scenarios, and each scenario contains specific instructions for the tester to carry out, including the expected results along the way.
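In code form, a test script is then an ordered walk through such cases. A minimal sketch follows, with hypothetical scenario steps standing in for real interactions with the application.

    # A test script chains test cases into user scenarios along a critical
    # pathway: here register -> log in -> place an order. The lambdas are
    # hypothetical stand-ins for real interactions with the application.
    def run_scenario(steps):
        for description, action, expected in steps:
            actual = action()
            status = "PASS" if actual == expected else "FAIL"
            print(f"{status}: {description}")

    scenario = [
        ("Register a new user", lambda: "account created", "account created"),
        ("Log in with the new account", lambda: "home page", "home page"),
        ("Place an order", lambda: "order confirmed", "order confirmed"),
    ]
    run_scenario(scenario)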
Test Data Categorization Techniques
1. Equivalence Partitioning
Divide the input domain into classes of data from which test cases can be generated.
The aim is to uncover whole classes of errors, rather than isolated faults.
An equivalence class represents a set of valid or invalid states. An input condition is either a specific numeric value, a range of values, a set of related values, or a boolean condition.
Equivalence classes can be defined as follows:
If an input condition specifies a range or a specific value, one valid and two invalid equivalence classes are defined.
If an input condition specifies a boolean or a member of a set, one valid and one invalid equivalence class are defined.
Test cases are developed and executed for each input domain data item.
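For example, suppose an input field accepts an age in the range 18 to 65 (a hypothetical requirement). The range rule above yields one valid and two invalid equivalence classes, and one representative test per class suffices. A minimal sketch:

    def accepts_age(age):
        # Hypothetical component under test: valid ages are 18-65 inclusive.
        return 18 <= age <= 65

    # The range rule above yields three classes; one representative each:
    #   valid:          18 <= age <= 65
    #   invalid (low):  age < 18
    #   invalid (high): age > 65
    assert accepts_age(30) is True     # valid class
    assert accepts_age(10) is False    # invalid class below the range
    assert accepts_age(80) is False    # invalid class above the range
    print("One test per equivalence class passed.")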
Some Testing Terms - part 2
6. SEI
'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.
7. CMM
'Capability Maturity Model', now called the CMMI ('Capability Maturity Model Integration'), developed by the SEI. It's a model of 5 levels of process 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMMI ratings by undergoing assessments by qualified auditors.
Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes in place; successes may not be repeatable.
Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.
Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.
Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.
Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.
Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of organizations was 100 software engineering/maintenance personnel; 32% of organizations were U.S. federal contractors or agencies. For those rated at Level 1, the most problematical key process area was in Software Quality Assurance.
8. ISO
'International Organisation for Standardization' - The ISO 9001:2000 standard (which replaces the previous standard of 1994) concerns quality systems that are assessed by outside auditors, and it applies to many kinds of production and manufacturing organizations, not just software. It covers documentation, design, development, production, testing, installation, servicing, and other processes. The full set of standards consists of: (a)Q9001-2000 - Quality Management Systems: Requirements; (b)Q9000-2000 - Quality Management Systems: Fundamentals and Vocabulary; (c)Q9004-2000 - Quality Management Systems: Guidelines for Performance Improvements. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO certification does not necessarily indicate quality products - it indicates only that documented processes are followed. Also see http://www.iso.ch/ for the latest information. In the U.S. the standards can be purchased via the ASQ web site at http://e-standards.asq.org/
9. IEEE
'Institute of Electrical and Electronics Engineers' - among other things, creates standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), 'IEEE Standard for Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.
10. ANSI
'American National Standards Institute', the primary industrial standards body in the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).
Other software development/IT management process assessment methods besides CMMI and ISO 9000 include SPICE, Trillium, TickIT, Bootstrap, ITIL, MOF, and CobiT.
Some Testing Terms - part 1
1. Validation:
The comparison between the actual characteristics of something (e.g. a product of a software project) and the expected characteristics. Validation is checking that you have built the right system.
2. Verification:
The comparison between the actual characteristics of something (e.g. a product of a software project) and the specified characteristics. Verification is checking that we have built the system right.
3. Configuration Management
Configuration management covers the processes used to control, coordinate, and track: code, requirements, documentation, problems, change requests, designs, tools/compilers/libraries/patches, changes made to them, and who makes the changes.
4. When to stop testing
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:
Deadlines (release deadlines, testing deadlines, etc.)
Test cases completed, with a certain percentage passed
Test budget depleted
Coverage of code/functionality/requirements reaches a specified point
Bug rate falls below a certain level
Beta or alpha testing period ends
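These factors can be combined into an explicit exit-criteria check, as in the sketch below; every threshold is an invented assumption for illustration, not a recommendation.

    # Hypothetical exit criteria combining the factors above.
    def should_stop(pass_rate, coverage, bugs_per_day, budget_left, deadline_hit):
        quality_gate = pass_rate >= 0.95 and coverage >= 0.80 and bugs_per_day < 1.0
        return deadline_hit or budget_left <= 0 or quality_gate

    # 96% of cases passed, 85% coverage, 0.5 new bugs/day, budget remains:
    print(should_stop(0.96, 0.85, 0.5, 10000, False))  # True: quality gate met
    print(should_stop(0.70, 0.50, 6.0, 10000, False))  # False: keep testing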
5. Risk Analysis/ Identifying Test Cases
Use risk analysis to determine where testing should be focused. Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgement skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include:
01 Which functionality is most important to the project's intended purpose?
02 Which functionality is most visible to the user?
03 Which functionality has the largest safety impact?
04 Which functionality has the largest financial impact on users?
05 Which aspects of the application are most important to the customer?
06 Which aspects of the application can be tested early in the development cycle?
07 Which parts of the code are most complex, and thus most subject to errors?
08 Which parts of the application were developed in rush or panic mode?
09 Which aspects of similar/related previous projects caused problems?
10 Which aspects of similar/related previous projects had large maintenance expenses?
11 Which parts of the requirements and design are unclear or poorly thought out?
12 What do the developers think are the highest-risk aspects of the application?
13 What kinds of problems would cause the worst publicity?
14 What kinds of problems would cause the most customer service complaints?
15 What kinds of tests could easily cover multiple functionalities?
16 Which tests will have the best high-risk-coverage to time-required ratio?
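One simple way to act on these questions is to score each feature for likelihood of failure and business impact, then test in descending order of risk. A minimal sketch with invented features and scores:

    # Risk-based prioritisation: score each feature for likelihood of
    # failure and business impact (1-5), then test in descending order of
    # risk = likelihood x impact. Features and scores are invented.
    features = [
        ("Payment processing", 4, 5),  # complex code, large financial impact
        ("Login", 3, 5),               # visible to every user
        ("Report export", 2, 2),
        ("Help pages", 1, 1),
    ]

    for name, likelihood, impact in sorted(
            features, key=lambda f: f[1] * f[2], reverse=True):
        print(f"risk={likelihood * impact:2d}  {name}")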
Control Structure testing
a. Conditions Testing
Condition testing aims to exercise all logical conditions in a program module.
Errors in expressions can be due to:
1. Boolean operator error
2. Boolean variable error
3. Boolean parenthesis error
4. Relational operator error
5. Arithmetic expression error
Condition testing methods focus on testing each condition in the program. Strategies proposed include:
Branch testing - execute every branch at least once.
Domain Testing - uses three or four tests for every relational operator.
Branch and relational operator testing - uses condition constraints.
Coverage of the constraint set guarantees detection of relational operator errors.
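As a small illustration, consider a module with one compound condition. Branch testing needs inputs that drive the whole condition both true and false; condition testing also varies each operand independently. The function and values below are hypothetical:

    def can_ship(weight_kg, paid):
        # One compound condition with two operands.
        if weight_kg <= 30 and paid:
            return "ship"
        return "hold"

    # Branch testing: every branch executed at least once.
    assert can_ship(10, True) == "ship"   # condition true
    assert can_ship(40, True) == "hold"   # condition false

    # Condition testing: vary each operand independently; this would
    # expose, e.g., a Boolean operator error ('or' written for 'and').
    assert can_ship(10, False) == "hold"
    assert can_ship(40, False) == "hold"
    print("Branch and condition tests passed.")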
Bugs
Bug
A programming error that causes a program to work poorly, produce incorrect results, or crash. (As an interesting aside, the term "bug" was popularized when a real moth was found jamming a relay of the Harvard Mark II computer in 1947, although the word had been used for engineering faults even earlier.)
Isolation
Think about the sequence of events that you just took the software through to eventually get to the problem. Ideally, you started testing by clicking one button, and then saw the problem immediately. More likely, though, you had been testing for a while, possibly for hours. Perhaps the last thing you did is the only thing required to reproduce the bug, or maybe you have to repeat hours of testing. Until you can prove that a simpler scenario is sufficient, you have to assume that every detail of your testing session is relevant. Your task is to rule out as many of those details as you can as not being relevant to the problem.
There are several different things that are subject to simplification. Consider:
Procedures. This is usually what testers focus on – shortening the step-by-step interaction with the system.
Inputs. This is all the data that you feed to the program, such as a command-line argument, a text field in a GUI interface, a file, or a database. You also want to reduce this data to the smallest data set that still reproduces the problem.
Configuration. What options have you selected that are different from the default configuration? If you can't reproduce the problem the way the software is configured out of the box, find the few crucial settings that are necessary for the problem to show up.
Platforms. Can you reproduce the problem on all of the operating systems, operating system versions, and hardware combinations that are supported? If not, then you've found an important clue. Also, what about other software that is running, and their versions? Many bugs are not platform-specific, and testing on other platforms can sometimes be difficult, so this area often isn't thoroughly explored.
Other state information. The items above probably don't capture every possible relevant variable. Look for other things that might vary from one system to another and cause the bug to manifest on some systems but not others.
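Input simplification in particular can be partly automated: repeatedly remove pieces of the failing input and keep any reduction that still reproduces the problem. A toy greedy sketch, where the reproduction check is a hypothetical stand-in for re-running the real test:

    def still_fails(lines):
        # Hypothetical reproduction check: in reality this would re-run
        # the application on the candidate input. Here the bug is assumed
        # to be triggered by any input containing the line "BAD".
        return "BAD" in lines

    def minimise(lines):
        # Greedily drop one element at a time, keeping each removal that
        # still reproduces the failure.
        i = 0
        while i < len(lines):
            candidate = lines[:i] + lines[i + 1:]
            if still_fails(candidate):
                lines = candidate   # still reproduces: the line was irrelevant
            else:
                i += 1              # relevant to the bug: keep it
        return lines

    print(minimise(["setup", "noise", "BAD", "more noise"]))  # ['BAD']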
Basis Path Testing
The aim is to derive a logical complexity measure of a procedural design and use it as a guide for defining a basis set of execution paths.
Test cases which exercise the basis set will execute every statement at least once.
a. Flow Graph Notation
Notation for representing control flow
On a flow graph:
1. Arrows, called edges, represent flow of control.
2. Circles, called nodes, represent one or more actions.
3. Areas bounded by edges and nodes are called regions.
4. A predicate node is a node containing a condition.
Any procedural design can be translated into a flow graph. Note that a compound Boolean expression used as a test generates at least two predicate nodes and additional arcs.
b. Cyclomatic Complexity
The cyclomatic complexity gives a quantitative measure of the logical complexity.
This value gives the number of independent paths in the basis set, and an upper bound for the number of tests to ensure that each statement is executed at least once.
An independent path is any path through the program that introduces at least one new set of processing statements or a new condition (i.e., a new edge).
A worked example (a flow graph with nodes 1 to 8; the figure itself is not reproduced here) has:
1. A cyclomatic complexity of 4, which can be calculated in any of three ways:
1. Number of regions of flow graph.
2. #Edges - #Nodes + 2
3. #Predicate Nodes + 1
2. Independent Paths:
1. 1, 8
2. 1, 2, 3, 7b, 1, 8
3. 1, 2, 4, 5, 7a, 7b, 1, 8
4. 1, 2, 4, 6, 7a, 7b, 1, 8
Cyclomatic complexity provides an upper bound for the number of tests required to guarantee coverage of all program statements.
c. Deriving Test Cases
1. Using the design or code, draw the corresponding flow graph.
2. Determine the cyclomatic complexity of the flow graph.
3. Determine a basis set of independent paths.
4. Prepare test cases that will force execution of each path in the basis set.
Note: some paths may only be executable as part of another test.
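As a hedged illustration of steps 1 to 4, here is an invented function whose flow graph has two predicate nodes, hence a cyclomatic complexity of 2 + 1 = 3, together with one test per basis path:

    def classify(x):
        # Two predicate nodes -> cyclomatic complexity = 2 + 1 = 3,
        # so the basis set contains three independent paths.
        if x < 0:
            return "negative"
        if x == 0:
            return "zero"
        return "positive"

    # One test case per basis path; together they execute every statement.
    assert classify(-5) == "negative"  # path through the first branch
    assert classify(0) == "zero"       # path through the second branch
    assert classify(7) == "positive"   # path taking neither branch
    print("Each statement executed at least once.")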
Wednesday, September 24, 2008
Glossary - M (part 1)
Metric: A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something such as number of bugs per lines of code.
Monkey Testing: Testing a system or application on the fly, i.e. a few ad hoc tests here and there to ensure the system or application does not crash.
Glossary - L (part 1)
Load Testing: See Performance Testing.
Localization Testing: Testing software that has been adapted ('localized') for a specific locality, language, or culture.
Loop Testing: A white box testing technique that exercises program loops.
Glossary - I (part 1)
Independent Test Group (ITG): A group of people whose primary responsibility is software testing.
Inspection: A group review quality improvement process for written material. It consists of two aspects; product (document itself) improvement and process improvement (of both document production and inspection).
Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.
Installation Testing: Confirms that the application under test installs, upgrades, and uninstalls correctly on the supported platforms and configurations.
Glossary - H (part 1)
High Order Tests: Black-box tests conducted once the software has been integrated.
Glossary - G (part 1)
Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particular module or piece of functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.
Glossary - F (part 1)
Functional Decomposition: A technique used during planning, analysis and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detail the characteristics of the product with regard to its intended features.
Functional Testing: See also Black Box Testing.
Testing the features and operational behavior of a product to ensure they correspond to its specifications.
Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.
Glossary - E (part 1)
Emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.
Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.
End-to-End testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Equivalence Class: A portion of a component's input or output domain for which the component's behaviour is assumed, from the component's specification, to be the same.
Equivalence Partitioning: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.
Glossary - D (part 1)
Data Dictionary: A database that contains definitions of all data items defined during analysis.
Data Flow Diagram: A modeling notation that represents a functional decomposition of a system.
Data Driven Testing: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
Debugging: The process of finding and removing the causes of software failures.
Defect: Nonconformance to requirements or to the functional / program specification.
Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.
Depth Testing: A test that exercises a feature of a product in full detail.
Dynamic Testing: Testing software through executing it. See also Static Testing.
Glossary - C (part 1)
CAST: Computer Aided Software Testing.
Capture/Replay Tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.
CMM: The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.
Cause Effect Graph: A graphical representation of inputs and their associated output effects, which can be used to design test cases.
Code Complete: Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.
Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.
Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.
Coding: The generation of source code.
Compatibility Testing: Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.
Component: A minimal software item for which a separate specification is available.
Component Testing: See Unit Testing.
Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
Context Driven Testing: The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.
Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
Cyclomatic Complexity: A measure of the logical complexity of an algorithm, used in white-box testing.
Glossary - B (part 1)
Backus-Naur Form: A metalanguage used to formally describe the syntax of a language.
Basic Block: A sequence of one or more consecutive, executable statements containing no branches.
Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.
Basis Set: The set of tests derived using basis path testing.
Baseline: The point at which some deliverable produced during the software engineering process is put under formal change control.
Benchmark Testing: Tests that use representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration.
Beta Testing: Testing of a pre-release version of a software product, conducted by customers.
Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.
Black Box Testing: Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.
Bottom Up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
Boundary Testing: Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)
Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner.
Boundary Value Analysis: BVA is similar to Equivalence Partitioning but focuses on "corner cases" - values at or just outside the limits defined by the specification. This means that if a function expects all values in the range -100 to +1000, test inputs would include -101 and +1001 (see the sketch after this list).
Branch Testing: Testing in which all branches in the program source code are tested at least once.
Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.
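A sketch of the boundary value analysis described in the entry above, using the -100 to +1000 range; the function is a hypothetical stand-in for the component under test:

    def in_range(value):
        # Hypothetical component: accepts values from -100 to 1000 inclusive.
        return -100 <= value <= 1000

    # Boundary value analysis: test at, just inside, and just outside
    # each boundary of the specified range.
    cases = {
        -101: False,  # just below the lower boundary
        -100: True,   # lower boundary
        -99: True,    # just inside the lower boundary
        999: True,    # just inside the upper boundary
        1000: True,   # upper boundary
        1001: False,  # just above the upper boundary
    }
    for value, expected in cases.items():
        assert in_range(value) == expected, value
    print("All boundary cases behaved as specified.")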