Monday, June 30, 2008
Glossary - M (part 1)
Metric: A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real, objective measurement of something, such as the number of bugs per thousand lines of code (a sketch follows these entries).
Monkey Testing: Testing a system or application on the fly, i.e., running just a few random tests here and there to ensure the system or application does not crash.
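As a small, hedged illustration of such a metric, the sketch below computes defect density (bugs per thousand lines of code); the function and figures are illustrative, not from any particular tool:

```python
def defect_density(defect_count: int, lines_of_code: int) -> float:
    """Defects per thousand lines of code (KLOC) -- a common objective metric."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return defect_count / (lines_of_code / 1000.0)

# Example: 42 defects found in a 28,000-line program.
print(defect_density(42, 28_000))  # 1.5 defects per KLOC
```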
Glossary - L (part 1)
Load Testing: See Performance Testing.
Localization Testing: Testing software that has been adapted (localized) for a specific locality, e.g., its language, regional formats, and cultural conventions.
Loop Testing: A white box testing technique that exercises program loops.
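To make loop testing concrete, here is a minimal illustrative sketch: a hypothetical function whose loop is exercised at zero, one, typical, and maximum iteration counts, the cases loop testing classically targets:

```python
def sum_first(values, n):
    """Hypothetical unit under test: sum the first n items of values."""
    total = 0
    for i in range(min(n, len(values))):
        total += values[i]
    return total

# Loop testing exercises the loop at its interesting iteration counts:
data = [5, 10, 15, 20]
assert sum_first(data, 0) == 0           # loop skipped entirely
assert sum_first(data, 1) == 5           # exactly one pass
assert sum_first(data, 2) == 15          # typical number of passes
assert sum_first(data, len(data)) == 50  # maximum passes
assert sum_first(data, 99) == 50         # request beyond the bound
```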
Glossary - I (part 1)
Independent Test Group (ITG): A group of people whose primary responsibility is software testing.
Inspection: A group review quality improvement process for written material. It consists of two aspects: product improvement (of the document itself) and process improvement (of both document production and inspection).
Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.
Installation Testing: Confirms that the application under test installs, configures, and uninstalls correctly across the supported platforms and configurations. (The definition often pasted here, about recovering from events such as disk-space shortage, loss of communication, or power-out conditions without loss of data or functionality, actually describes Recovery Testing.)
Glossary - H (part 1)
High Order Tests: Black-box tests conducted once the software has been integrated.
Glossary - G (part 1)
Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particular module or piece of functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.
Glossary - F (part 1)
Functional Decomposition: A technique used during planning, analysis and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detail the characteristics of the product with regard to its intended features.
Functional Testing: See also Black Box Testing.
(1) Testing the features and operational behavior of a product to ensure they correspond to its specifications.
(2) Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.
Glossary - E (part 1)
Emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.
Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.
End-to-End testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Equivalence Class: A portion of a component's input or output domains for which the component's behaviour is assumed, from the component's specification, to be the same.
Equivalence Partitioning: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes (see the sketch after these entries).
Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.
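A minimal sketch of equivalence partitioning as defined above, using a hypothetical component whose specification accepts ages 18 to 65:

```python
def is_eligible(age: int) -> bool:
    """Hypothetical component under test: valid applicants are aged 18-65."""
    return 18 <= age <= 65

# Three equivalence classes follow from the specification:
#   below range (< 18), in range (18-65), above range (> 65).
# One representative per class stands in for the whole class:
assert is_eligible(10) is False   # representative of the "too young" class
assert is_eligible(40) is True    # representative of the valid class
assert is_eligible(70) is False   # representative of the "too old" class
```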
Glossary - D (part 1)
Data Dictionary: A database that contains definitions of all data items defined during analysis.
Data Flow Diagram: A modeling notation that represents a functional decomposition of a system.
Data Driven Testing: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet; a common technique in automated testing (see the sketch after these entries).
Debugging: The process of finding and removing the causes of software failures.
Defect: Nonconformance to requirements or to the functional/program specification.
Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.
Depth Testing: A test that exercises a feature of a product in full detail.
Dynamic Testing: Testing software through executing it. See also Static Testing.
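A minimal data-driven testing sketch, as promised under the Data Driven Testing entry; the data columns and the function under test are illustrative assumptions (the table is inlined here, but would normally live in a file or spreadsheet):

```python
import csv
import io

def discount(total: float) -> float:
    """Hypothetical function under test: 10% off orders of 100 or more."""
    return total * 0.9 if total >= 100 else total

# Externally maintained test data (columns: total,expected) parameterizes
# the test action -- the essence of data-driven testing.
cases = io.StringIO("total,expected\n50,50\n100,90\n250,225\n")
for row in csv.DictReader(cases):
    total, expected = float(row["total"]), float(row["expected"])
    assert abs(discount(total) - expected) < 1e-9, row
```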
Glossary - C (part 1)
CAST: Computer Aided Software Testing.
Capture/Replay Tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.
CMM: The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.
Cause Effect Graph: A graphical representation of inputs and the associated outputs effects which can be used to design test cases.
Code Complete: Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.
Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.
Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.
Coding: The generation of source code.
Compatibility Testing: Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.
Component: A minimal software item for which a separate specification is available.
Component Testing: See Unit Testing.
Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
Context Driven Testing: The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.
Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
Cyclomatic Complexity: A measure of the logical complexity of an algorithm, used in white-box testing.
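Cyclomatic complexity is commonly computed as V(G) = E - N + 2 over the control-flow graph (edges, nodes), or equivalently as the number of decision points plus one. A rough, illustrative sketch of the latter counting rule:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Rough estimate: one plus the number of decision points in the code.
    Equivalent to V(G) = E - N + 2 for a single-entry, single-exit graph."""
    decisions = (ast.If, ast.For, ast.While, ast.BoolOp)
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, decisions) for node in ast.walk(tree))

src = """
def classify(x):
    if x < 0:
        return "negative"
    while x > 10:
        x -= 10
    return "small"
"""
print(cyclomatic_complexity(src))  # 3: one if, one while, plus one
```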
Glossary - B (part 1)
Backus-Naur Form: A metalanguage used to formally describe the syntax of a language.
Basic Block: A sequence of one or more consecutive, executable statements containing no branches.
Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.
Basis Set: The set of tests derived using basis path testing.
Baseline: The point at which some deliverable produced during the software engineering process is put under formal change control.
Benchmark Testing: Tests that use representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration.
Beta Testing: Testing of a pre-release version of a software product, conducted by customers.
Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI (Application Binary Interface) specification.
Black Box Testing: Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.
Bottom Up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
Boundary Testing: Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)
Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner.
Boundary Value Analysis: BVA is similar to Equivalence Partitioning but focuses on "corner cases", values at and just beyond the limits defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001 (see the sketch after these entries).
Branch Testing: Testing in which all branches in the program source code are tested at least once.
Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.
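A minimal sketch of boundary value analysis for the range used in the BVA entry above (negative 100 to positive 1000); the function under test is hypothetical:

```python
def in_range(value: int) -> bool:
    """Hypothetical function under test: accepts values from -100 to 1000."""
    return -100 <= value <= 1000

# BVA tests at and just beyond each boundary of the specified range:
for value, expected in [(-101, False), (-100, True), (-99, True),
                        (999, True), (1000, True), (1001, False)]:
    assert in_range(value) is expected, value
```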
Thursday, June 26, 2008
SOFTWARE PROJECT MANAGEMENT
Software Project Management is all about managing the planning, monitoring, and execution of a project. In general, there are four successive processes that bring a system into being:
Requirement Gathering
Feasibility Study
Project Planning
Project Execution
a) Requirement Gathering
The requirements process is a full system life cycle set of activities that includes:
Understanding the customers' needs and expectations
Identifying and analyzing the requirements
Defining the requirements
Clarifying and restating the requirements
Prioritizing requirements
Partitioning requirements
Tracking requirements
Managing requirements
Testing and verifying requirements
Validating requirements
Requirements analysis and management need additional attention as a key factor in the success of systems and software development projects.
Recommended Requirements Gathering Practices
The following is a list of recommended requirements gathering practices. They are based on the author's extensive review of industry literature combined with the practical experiences of requirements analysts who have supported dozens of projects.
Understand a project vision and scope document.
Initiate a project glossary that provides definitions of words that are acceptable to and used by customers/users and the developers, and a list of acronyms to facilitate effective communication.
Evolve the real requirements via a "joint" customer/user and developer effort. Focus on product benefits (necessary requirements), not features. Address the minimum and highest priority requirements needed to meet real customer and user needs.
Document the rationale for each requirement (why it is needed).
Establish a mechanism to control changes to requirements and new requirements.
Prioritize the real requirements to determine those that should be met in the first release or product and those that can be addressed subsequently.
When the requirements are volatile (and perhaps even when they are not), consider an incremental development approach. This acknowledges that some of the requirements are "unknowable" until customers and users start using the system.
Use peer reviews and inspections of all requirements work products.
Use an industry-strength automated requirements tool.
Assign attributes to each requirement.
Provide traceability.
Maintain the history of each requirement (attributes, traceability, and history come together in the sketch after this list).
Involve customers and users throughout the development effort.
Perform requirements validation and verification activities in the requirements gathering process to ensure that each requirement is testable.
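Several of these practices -- assigning attributes, providing traceability, maintaining history -- come together in the record a requirements tool keeps per requirement. A minimal, hypothetical sketch:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """Illustrative requirement record: attributes, rationale, traceability, history."""
    req_id: str
    text: str
    rationale: str              # why the requirement is needed
    priority: str = "medium"    # attribute assigned to each requirement
    status: str = "proposed"
    traces_to: list[str] = field(default_factory=list)  # design items, test cases
    history: list[str] = field(default_factory=list)    # change log

    def update(self, new_text: str, reason: str) -> None:
        self.history.append(f"was: {self.text!r} ({reason})")
        self.text = new_text

req = Requirement("REQ-17", "System shall export reports as PDF.",
                  rationale="Customers archive reports for audits.",
                  priority="high", traces_to=["DES-4", "TC-112"])
req.update("System shall export reports as PDF/A.", "archival standard required")
```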
Quality testing dashboard
The tool would gather information as test cases were created, as problem reports were entered, and as test cases were executed. The data would automatically be gathered into a database, kept online and up to the second, and reporting would be available at all times.
Because the test management system fosters a structured test process, it can provide several reports and processes that would otherwise require extensive manual data collection, organization, analysis, and reporting.
Throughout the lifecycle of a project, the test management system can provide relevant status reporting to facilitate planning, test execution, results tracking, and release decisions.
1. During test development, reports are available to determine what work has been completed and what tasks remain open.
2. During execution, the test management system tracks which scripts have been executed and which have not, the result of each execution, the requirements coverage achieved, and links to defects reported from failed test cases, providing a complete view of release readiness.
Reports just based on defect tracking data show incomplete status; for example, a report that there are ten open defects does not tell much, unless we know how many test cases have been executed and how much requirements coverage is achieved by these test cases. We can use test management data to generate this missing information. Test case metrics complement defect reports metrics and give a better view of product quality.
Apart from this, other reports can be generated based on different attributes like type of test, modules, etc. Test management can provide objective, accurate, real-time information, which is just what is needed for deciding on the quality of a product. This is the most important benefit of having a structured testing process and tool. Based on test reports available, the product manager can make informed decisions about the quality of the application under development.
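As a sketch of the kind of roll-up such a dashboard computes, the snippet below derives pass rate and requirements coverage from raw execution records; the record shapes and names are assumptions, not any real tool's schema:

```python
# Illustrative test-management records: (test_case, requirement, result).
executions = [
    ("TC-1", "REQ-1", "pass"), ("TC-2", "REQ-1", "fail"),
    ("TC-3", "REQ-2", "pass"), ("TC-4", "REQ-3", "pass"),
]
all_requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

executed = len(executions)
passed = sum(1 for _, _, r in executions if r == "pass")
covered = {req for _, req, _ in executions}

print(f"pass rate: {passed / executed:.0%}")                                  # 75%
print(f"requirements coverage: {len(covered) / len(all_requirements):.0%}")  # 75%
print(f"uncovered: {sorted(all_requirements - covered)}")                    # ['REQ-4']
```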
Preferred Requirements Gathering Techniques - Risk Analysis 5
Risk Assessment for Projects
At least 50% of all projects (if not many more) are not successful in the sense that they do not achieve their objectives, do not deliver the promised results, sacrifice the predefined quality, are not completed in the given time schedule, or use far more resources than originally planned.
There is a multitude of reasons for projects to fail. Projects often come on top of the usual work load and members of the project team belong to different departments, i.e. they have their first accountability to their line manager which often brings them into conflict with the project work. Team members have to work overtime if they want to complete their project tasks. At the end, project work is often sacrificed, and time budgets are often not sufficient.
What is mostly neglected: the occurrence of problems in project implementation increases with the complexity and length of the project. Larger and more complex projects that run over more than a year have other reasons for failure. Often these projects have permanent staff, who are released from other tasks and work full time on the project, and well-established budgets. However, such projects depend on a large number of external assumptions which influence their outcomes. It is impossible to clearly predict the future and the impact of various uncertain influence factors. Many project plans are too rigid to respond flexibly to changing needs.
Common to most projects is the lack of appropriate and transparent communication. Team members (and other stakeholders) often do not share a common understanding of the project's goals and strategies. It is important to unveil these misunderstandings and hidden agendas from the very beginning. The following tool, if applied in a project planning session, helps to uncover issues that otherwise might remain undiscussed.
Explanations:
Business Level: Does the project have a strategic importance for the organization?
Length: How long is the intended implementation time?
Complexity: Does the project cover various business areas / objectives?
Technology: Is the technology to be applied well-established or is it a technology which yet has to be developed?
Number of organizational units involved: cross functional / geographical areas, etc.
Costs: estimated costs of the project
Overall risk of failure: How would you personally rank the risk that the project cannot achieve the objectives with the intended resources?
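A minimal sketch of how such a planning-session tool might turn those factor rankings into a comparable overall score; the weights and the 1-to-5 scale are illustrative assumptions, not part of the original tool:

```python
# Illustrative risk ranking over the factors explained above.
# Each factor is scored 1 (low risk) to 5 (high risk); weights are assumptions.
weights = {
    "business_level": 0.20, "length": 0.15, "complexity": 0.20,
    "technology": 0.20, "org_units": 0.15, "costs": 0.10,
}
scores = {  # one participant's answers for a hypothetical project
    "business_level": 4, "length": 3, "complexity": 5,
    "technology": 2, "org_units": 4, "costs": 3,
}

overall = sum(weights[f] * scores[f] for f in weights)
print(f"weighted risk score: {overall:.2f} / 5")  # 3.55 here

# Comparing each participant's scores in a planning session surfaces
# differing assumptions before they become hidden agendas.
```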
Preferred Requirements Gathering Techniques - 5
Effort Estimation
Effort estimation consists of predicting how many hours of work and how many workers are needed to develop a project. The effort invested in a software project is probably one of the most important and most analysed variables in the process of project management in recent years. Determining the value of this variable when initiating software projects allows us to plan forthcoming activities adequately. As far as estimation and prediction are concerned, there are still a number of unsolved problems and errors. To obtain good results, it is essential to take previous projects into consideration. Estimating effort with a high degree of reliability is a problem which has not yet been solved, and the project manager has to deal with it from the very beginning.
Cost Estimation
It is the responsibility of the project manager to make accurate estimations of effort and cost. This is particularly true for projects subject to competitive bidding, where a bid too high compared with competitors would result in losing the contract, and a bid too low could result in a loss to the organisation. This does not mean that internal projects are unimportant: from a project leader's estimate, management often decides whether to proceed with the project. Industry has a need for accurate estimates of effort and size at a very early stage in a project. However, when software cost estimates are made early in the software development process, they can be based on wrong or incomplete requirements. A software cost estimation process is the set of techniques and procedures that an organisation uses to arrive at an estimate. An important aspect of software projects is to know the cost; the major contributing factor is effort.
Why is SCE difficult and error prone?
Software cost estimation requires a significant amount of effort to perform correctly.
SCE is often done hurriedly, without an appreciation for the effort required.
You need experience at developing estimates, especially for large projects.
Human bias: an estimator is likely to consider how long a certain portion of the system would take and then merely extrapolate this estimate to the rest of the system, ignoring the non-linear aspects of software development.
The causes of poor and inaccurate estimation include:
Imprecise and drifting requirements.
New software projects are nearly always different from the last.
Software practitioners don't collect enough information about past projects.
Estimates are forced to match the resources available.
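One classical way to make such estimates concrete, offered here only as an illustration and not prescribed by the text, is Boehm's Basic COCOMO model, which predicts effort in person-months from program size in thousands of lines of code (KLOC):

```python
def basic_cocomo(kloc: float, a: float = 2.4, b: float = 1.05) -> tuple[float, float]:
    """Basic COCOMO for an 'organic' project (Boehm's published coefficients).

    Returns (effort in person-months, schedule in months)."""
    effort = a * kloc ** b
    duration = 2.5 * effort ** 0.38
    return effort, duration

effort, months = basic_cocomo(32)  # a hypothetical 32 KLOC project
print(f"{effort:.0f} person-months over {months:.0f} months, "
      f"about {effort / months:.1f} people")  # ~91 PM, ~14 months, ~6.5 people
```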
Preferred Requirements Gathering Techniques - 3
Interface Analysis.
Missing or incorrect interfaces are often a major cause of cost overruns and product failures. Identifying external interfaces early clarifies product scope, aids risk assessment, reduces product development costs, and improves customer satisfaction. The steps of identifying, simplifying, controlling, documenting, communicating, and monitoring interfaces help to reduce the risk of problems related to interfaces.
Please see attached Requirement Analysis Template
b) The Feasibility Study
The Feasibility Study uses technical information and cost data to determine the economic potential and practicality (i.e., feasibility) of a project. It uses techniques that help evaluate a project and/or compare it with other projects; factors such as interest rates, operating costs, and depreciation are generally considered. The following are produced during the feasibility study:
An abstract definition of the problem
Formulation of different solution strategies
Examination of alternative solution strategies (in terms of benefits, resource requirements, costs, etc.)
Cost and benefit analysis to determine the best strategy
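The cost-and-benefit step above, with interest rates considered, often reduces to a discounted comparison of strategies. A minimal sketch, with illustrative cash flows and rate:

```python
def npv(rate: float, cashflows: list[float]) -> float:
    """Net present value of yearly cash flows (year 0 first) at a given
    interest (discount) rate -- one common feasibility-study yardstick."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cashflows))

# Hypothetical strategy: 100k up front, 40k/year benefit for 4 years, 10% rate.
print(round(npv(0.10, [-100_000, 40_000, 40_000, 40_000, 40_000])))  # ~26,795
```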
Who uses it?
Technical Architect, Business Analyst, Configuration Manager, Development Manager, Project Manager, IT Manager, System Administrator, Test Manager, Documentation Manager, Technical Writers.
When is it used?
The Feasibility Study analyses potential solutions against a set of requirements, evaluates their ability to meet these objectives, describes a recommended solution, and offers a justification for this selection.
c) Project Planning
When a project is estimated to be feasible, project planning is done. Project planning consists of the following steps:
Effort, Cost, Resource and Project Duration planning
Risk Analysis and mitigation plan
Project Scheduling
Staffing organization and Staffing Plan
Preferred Requirements Gathering Techniques - 2
Prototyping.
Prototyping is a technique for building a quick and rough version of a desired system or parts of that system. The prototype illustrates the capabilities of the system to users and designers. It serves as a communications mechanism to allow reviewers to understand interactions with the system. Prototyping sometimes gives an impression that developers are further along than is actually the case, giving users an overly optimistic impression of completion possibilities. Prototypes can be combined effectively with other approaches such as JAD and models.
Use Cases.
A use case is a picture of actions a system performs, depicting the actors. It should be accompanied by a textual description and not be used in isolation of other requirements gathering techniques. Use cases should always be supplemented with quality attributes and other information such as interface characteristics. Many developers believe that use cases and scenarios (descriptions of sequences of events) facilitate team communication. They provide a context for the requirements by expressing sequences of events and a common language for end users and the technical team.
Be cautioned that use cases alone do not provide enough information to enable development activities. Other requirements elicitation techniques should also be used in conjunction with use cases. Use operational concepts as a simple, cost-effective way to build a consensus among stakeholders and to address two large classes of requirements errors: omitted requirements and conflicting requirements. Operational concepts identify user interface issues early, provide opportunities for early validation, and form a foundation for testing scenarios in product verification.
Preferred Requirements Gathering Techniques - 1
The following is a set of recommended requirements elicitation techniques. They can be used in combination, and their advantage is that they are effective at drawing out the real requirements for planned development efforts.
Interviews
Interviews are used to gather information. However, the predisposition, experience, understanding, and bias of the person being interviewed influence the information obtained. The use of context-free questions by the interviewer helps avoid prejudicing the response. A context-free question is one that does not suggest a particular response. For example: Who is the client for this system? What is the real reason for wanting to solve this problem? What environment is this product likely to encounter? What kind of product precision is required?
Document Analysis
All effective requirements elicitation involves some level of document analysis: business plans, market studies, contracts, requests for proposals, statements of work, existing guidelines, analyses of existing systems, and procedures. Improved requirements coverage results from identifying and consulting all likely sources of requirements.
Brainstorming
Brainstorming involves both idea generation and idea reduction. The goal of the former is to identify as many ideas as possible, while the latter ranks the ideas into those considered most useful by the group. Brainstorming is a powerful technique because the most creative or effective ideas often result from combining seemingly unrelated ideas. Also, this technique encourages original thinking and unusual ideas.
Requirements Workshops
Requirements workshops are a powerful technique for eliciting requirements because they can be designed to encourage consensus concerning the requirements of a particular capability. They are best facilitated by an outside expert and are typically short (one or a few days). Other advantages are often achieved as well: participant commitment to the work products and project success, teamwork, resolution of political issues, and consensus on a host of topics. Benefits of requirements workshops include the following:
- Workshop costs are often lower than those for multiple interviews.
- They help to give structure to the requirements capture and analysis process.
- They are dynamic, interactive, and cooperative.
- They involve users and cut across organizational boundaries.
- They help to identify and prioritize needs and resolve contentious issues.
- When properly run, they help to manage users' expectations and attitudes toward change.
A special category of requirements workshop is the Joint Application Development (JAD) workshop. JAD is a method for developing requirements through which customers, user representatives, and developers work together with a facilitator to produce a requirements specification that both sides support.
href="http://www.crestechsoftware.com/public_traini
ng.php/">Software Testing Training
Software testing
institute
href="http://www.crestechsoftware.com/corporate_tra
ining.php">corporate training software testing
For More Visit Site
href="http://www.crestechsoftware.com/">http://www.
crestechsoftware.com/
For discussion FORUM
href="http://www.crestechsoftware.com/forum">http:/
/www.crestechsoftware.com/forum
Glossary - A (part 1)
A
Acceptance Testing: Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate that the software meets a set of agreed acceptance criteria.
Accessibility Testing: Verifying that a product is accessible to people with disabilities (e.g., visual, hearing, or cognitive impairments).
Ad Hoc Testing: A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well. See also Monkey Testing.
Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.
Application Binary Interface (ABI): A specification defining requirements for portability of applications in binary form across different system platforms and environments.
Application Programming Interface (API): A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.
Automated Software Quality (ASQ): The use of software tools, such as automated testing tools, to improve software quality.
Automated Testing: Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing. The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
Thursday, June 19, 2008
ISTQB Foundation Certification
Session 1:
- The Basics of Software Testing: error and bug terminology; testing terms; the psychology of testing; general principles of testing; test planning and control; test analysis and design; test implementation and execution; evaluation of test exit criteria; test closure activities
- The General V Model: component testing; integration testing; system testing; acceptance testing; generic types of testing
- Static Testing: examination of test groups; roles and responsibilities in a test group; reviews and types of reviews; static analysis basics; data flow analysis; control flow analysis
Dynamic Analysis
- Black Box Testing Techniques: equivalence class partitioning; boundary value analysis; state transition testing; cause-effect graphing; use case testing
- White Box Analysis: statement coverage; branch coverage; condition testing; path coverage
Test Tools
- Types of test tools
- Tools for test management and control
- Tools for test specifications
- Tools for static and dynamic testing
- Tools for non-functional tests
- Selection and introduction of test tools
Session 6:
Test Management - Test Organization
- Test planning
- Cost and economy aspects
- Definition of test strategy
- Test activity management
- Incident management
- Requirements for configuration management
ISTQB Advanced Certification
Overview
The objective of the training is to equip participants with test-management skills. It covers the testing process, test policy and handbook, test plan and test control, test deviation management, risk-based test management, test metrics, and test staff qualification skills. The course is also in line with the ISTQB Advanced Level Test Manager certification, helping you prepare for the certification exam.
Duration: 2 Days
Intended Audience
- Software testers having basic testing skills
- Test leads who need to lead testing teams technically
- Test managers who need to manage QA processes and test teams
- Test Management: A Software Testing Perspective
- Test Process Fundamentals
- Test and Development Process
- Classification of Development Processes
- Test Process Fundamentals
- Test Planning and Control
- Test Analysis and Design
- Test Tools
- Audit and Review Process
Test and Development Process
The General V model
W-Model
Extreme Programming
Rapid Application Development
Test Policy and Handbook
Quality Policy and Test Policy
Test Policy and Handbook
Test Plan
- General Test Plan Structure
- The Level Test Plan
- IEEE 829, Standard for Test Documentation
- Definition Of Test Strategy
- Test Effort Estimation
- Organization Of Test Team and Test Levels
- Test Planning as an Iterative Process Accompanying Development
- Initiating the Test Tasks
- Monitoring the Test Progress
- Reacting to Test Result
- Reacting to Changed Circumstances
- Evaluating Test Completion
- General Techniques and Approaches
- Improving the Software Development Process
- Evaluating the Test Processes
- Audit and Assessments
- Incident handling
- Standardized Classification for Software Anomalies According to IEEE 1044
- Risk Identification
- Risk Analysis and Evaluation
- Risk Control and Treatment
- Risk Monitoring
- Risk Oriented Test Plan Creation and Test Prioritization
- Individual Skill
- Functional Team Roles
- Social Team Roles
- Test Execution Report
- Client Sign-Off Report
- Managing Execution Report
- Metrics Definition and Selection
- Presenting Measurement Values
- Types Of Test Metrics
- Residual Defect Estimation and Reliability
Testing Process Practices
Test Strategy Definition
Team Handling
Effective Estimation
Effective Metrics Utilization
Test Reports
CSTE Certification
KNOWLEDGE OF: THE TEST ENVIRONMENT
Knowledge Domain 1: Test Principles and Concepts
- Definition of Test Specifications
- Testing Techniques
- Testing Methods
- Independent Testing
- Commercial Off-the-Shelf Software
- Testing Code Development Under Outside Contract
- Test Quality
- Testing Life Cycle
- Vocabulary
- The Development and Acquisition Process
- Process Knowledge
- Roles/Responsibilities
- Quality Principles
- V Testing Concepts
- Testing During and After Development Acquisition
- Verification
- Validation
- Test Approaches
- Quality Attributes
- Test Management
- Giving Information
- Receiving Information
- Personal Effectiveness
- Continuing Professional Education
- Leadership
- Recognition
- Networking
- Code of Ethics
- Test Standard
- Test Environment Components
- Test Tools
- Quality Assurance/Quality Control
- Building the Test Environment Work Processes
- Adapting the Test Environment to Different Technologies
KNOWLEDGE OF: TEST PLANNING
Knowledge Domain 5: Risk Analysis
- Risk Identification
- Managing Risks
- Pre-Planning Activities
- Test Planning
- Post Planning Activities
KNOWLEDGE OF: EXECUTING THE TEST PLAN
Knowledge Domain 7: Test Design
- Design Preparation
- Design Execution
- Execute Tests
- Compare Actual versus Expected Results
- Test Log
- Record Discrepancies
- Defect Tracking
- Testing Defect Correction
- Concepts of Acceptance Testing
- Roles and Responsibilities
- Acceptance Test Process
KNOWLEDGE OF: TEST ANALYSIS AND REPORTING
Knowledge Domain 11: Status of Testing
- Test Completion Criteria
- Test Metrics
- Management by Fact
- Reporting Tools
- Test Report Standards
- Statistical Analysis
HP Quality Center (QC) Certification
Overview
Mercury Quality Center is a web-based test management tool that provides the methodology, structure, organization, and documentation for all phases of the application testing process. It serves as a central repository for all your testing assets and provides a clear foundation for the entire testing process. It establishes seamless integration and smooth information flow from one stage of the testing process to the next, and it supports the analysis of test data and coverage statistics to provide a clear picture of an application's accuracy and quality at each point in its lifecycle. Because it is completely web-enabled, it supports communication and collaboration among distributed testing teams.
Duration: 2.5 Days (20 Hours)
Course Objectives
This course teaches you to:
- Discuss the value of test management
- Understand the architecture of QC
- Understand the implementation of QC at different levels of the testing life cycle
Prerequisites
Candidates should be well versed with the concepts of Manual Software Testing
Intended Audience
Quality assurance engineers, new users of Quality Center who need to implement QC, and executives who will be involved in any part of testing.
Course Outline
Quality Center - Introduction
- Need of Test-Management Tool
- Module (TestDirector Project, Site Administration, Customization)
- Domain/Project Fundamentals
- How to Get Started
- Creating Domain/project
- Adding users to project
- Creating Groups
Release and Cycle creation
Test Requirements
- Example of a test requirement
- Importance of tracing and tracking requirements
- Reviewing and building a requirements structure
- Entering requirements manually
- Review of an existing test case
- Parameters
- Building a test case structure
- Creating manual test cases
- Requirements coverage
- Creating folders and test sets
- Defining test execution flow
- Setting test set properties
- Manual test execution
- Logging defects during manual testing
- Automated test execution
- Adding test hosts
- Running a test set
- Setting run times
- Reporting defects
- Searching for similar defects
- Using grid filters
- Deleting defects
- Analysis menu graphs and reports
- Creating editable reports with the advanced Reporting
HP Quick Test Professional (QTP) Certification
Overview
QTP
Duration: 3 Days (24 Hours)
Course Objectives
This course teaches you:
- The concepts of functional automation
- How to get up to speed with QTP and implement it for effective test automation
- The advanced-level features of QTP, with hands-on practice
Prerequisites
Candidates should be well versed with the concepts of Manual Software Testing
Intended Audience
Quality assurance engineers, new users of QTP who need to implement it, and executives who will be involved in any part of testing.
Course Outline
Introduction to Automation
Architecture of Functional Automation Tools
Record and Play
Modes of Recording
Object Repository (Types)
Object Repository Manager (ORM) and Merging of OR
Object Identification
Actions
Parameterization
Checkpoints (Standard, Text, Bitmap, Database, XML from Resource)
Output Values (Standard, Text, Text Area, Bitmap, Database, XML from Resource)
Synchronization Points
Regular Expression
Recovery Scenarios
Function Libraries
Define VB Functions
VB subroutines
Accessing Data table at Runtime using VBScript
Concept of Descriptive Programming
Single physical description
Object of physical descriptions
Framework
Types of Framework: 0, 1, 2, 3
Introduction to a Case Study
Building a Framework for a real-life application
Friday, June 13, 2008
Glossary - R (part 1)
Race Condition: A cause of concurrency problems: multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access (a sketch follows these definitions).
Ramp Testing: Continuously raising an input signal until the system breaks down.
Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.
Regression Testing: Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
Release Candidate: A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).
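To make the Race Condition definition concrete, here is a minimal Python sketch (all names are invented for illustration): two threads update a shared counter with no lock moderating access, and because the increment is a non-atomic read-modify-write, updates can be lost.

import threading

counter = 0  # the shared resource

def worker(iterations):
    global counter
    for _ in range(iterations):
        counter += 1  # read-modify-write with no synchronization

threads = [threading.Thread(target=worker, args=(100000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # often less than the expected 200000 when updates are lost

Guarding the increment with a threading.Lock removes the race.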
Glossary - Q (part 1)
Quality Assurance: All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.
Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.
Quality Circle: A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.
Quality Control: The operational techniques and the activities used to fulfill and verify requirements of quality.
Quality Management: That aspect of the overall management function that determines and implements the quality policy.
Quality Policy: The overall intentions and direction of an organization as regards quality as formally expressed by top management.
Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.
Glossary - N (part 1)
Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing.
N+1 Testing: A variation of Regression Testing. Testing conducted with multiple cycles in which errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The cycles are typically repeated until the solution reaches a steady state and there are no errors. See also Regression Testing.
Functional Test Cases
Each test case is written in the form Action > Result: if the action cannot be performed as described, or the specified result does not occur, the test case fails. Organize the test cases within the categories specified below; an example follows the category list.
Basic Feature Tests
Integration Tests
Negative Tests
Boundary Tests
Enhanced Feature Tests
Stress Tests
Performance Tests
Compatibility Tests
Deployment Tests
Internationalization Tests
Regression Tests
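For illustration, here is a hypothetical pair of test cases in the Action > Result form (the feature and values are invented):

Basic Feature Test
Action: Enter a valid user name and password and click Sign In.
Result: The user's home page is displayed.

Negative Test
Action: Enter a valid user name with an incorrect password and click Sign In.
Result: An "invalid credentials" message is shown and the user remains on the sign-in page.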
Glossary - M (part 1)
Metric: A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something such as number of bugs per lines of code.
Monkey Testing: Testing a system or an application on the fly, i.e. a few quick tests here and there to ensure the system or application does not crash.
Glossary - L (part 1)
Load Testing: See Performance Testing.
Localization Testing: Testing of software that has been adapted (localized) for a specific locality.
Loop Testing: A white box testing technique that exercises program loops.
Glossary - I (part 1)
Independent Test Group (ITG): A group of people whose primary responsibility is software testing.
Inspection: A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection).
Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.
Installation Testing: Confirms that the application under test installs and runs correctly under the supported installation conditions, for example a clean installation, an upgrade, or a full versus custom installation.
Glossary - H (part 1)
High Order Tests: Black-box tests conducted once the software has been integrated.
Glossary - G (part 1)
Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particular module or piece of functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.
Glossary - F (part 1)
Functional Decomposition: A technique used during planning, analysis and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detail the characteristics of the product with regard to its intended features.
Functional Testing: See also Black Box Testing. (1) Testing the features and operational behavior of a product to ensure they correspond to its specifications. (2) Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.
Glossary - E (part 1)
Emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.
Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.
End-to-End testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Equivalence Class: A portion of a component's input or output domain for which the component's behaviour is assumed, on the basis of the component's specification, to be the same.
Equivalence Partitioning: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes (see the sketch after these definitions).
Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.
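A minimal Python sketch of equivalence partitioning, assuming a hypothetical component that accepts ages from 18 to 65 inclusive (the function and values are invented for illustration):

# Three equivalence classes: below the range, within it, above it.
def is_eligible(age):
    return 18 <= age <= 65

# One representative value stands in for each whole class.
for age, expected in [(10, False), (40, True), (70, False)]:
    assert is_eligible(age) == expected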
Glossary - D (part 1)
Data Dictionary: A database that contains definitions of all data items defined during analysis.
Data Flow Diagram: A modeling notation that represents a functional decomposition of a system.
Data Driven Testing: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing (see the sketch after these definitions).
Debugging: The process of finding and removing the causes of software failures.
Defect: Nonconformance to requirements or to the functional/program specification.
Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.
Depth Testing: A test that exercises a feature of a product in full detail.
Dynamic Testing: Testing software through executing it. See also Static Testing.
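A minimal Python sketch of data-driven testing, assuming a hypothetical function under test and an external file add_cases.csv with columns a, b, and expected (both invented for illustration):

import csv

def add(a, b):  # the function under test
    return a + b

# Each data row drives one execution of the same test logic.
with open("add_cases.csv", newline="") as f:
    for row in csv.DictReader(f):
        assert add(int(row["a"]), int(row["b"])) == int(row["expected"])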
Glossary - C (part 1)
CAST: Computer Aided Software Testing.
Capture/Replay Tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.
CMM: The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.
Cause Effect Graph: A graphical representation of inputs and the associated output effects, which can be used to design test cases.
Code Complete: Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.
Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.
Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.
Coding: The generation of source code.
Compatibility Testing: Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.
Component: A minimal software item for which a separate specification is available.
Component Testing: See Unit Testing.
Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
Context Driven Testing: The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.
Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
Cyclomatic Complexity: A measure of the logical complexity of an algorithm, used in white-box testing (a worked example follows these definitions).
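A small worked example in Python (the function is invented for illustration). Counting each binary decision point and adding one gives the cyclomatic complexity, and basis path testing needs at least that many test cases:

def classify(x):
    if x < 0:    # decision point 1
        return "negative"
    if x == 0:   # decision point 2
        return "zero"
    return "positive"

# V(G) = decision points + 1 = 2 + 1 = 3, so at least three tests:
assert classify(-1) == "negative"
assert classify(0) == "zero"
assert classify(1) == "positive"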
Glossary - B (part 1)
B
Backus-Naur Form: A metalanguage used to formally describe the syntax of a language.
Basic Block: A sequence of one or more consecutive, executable statements containing no branches.
Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.
Basis Set: The set of tests derived using basis path testing.
Baseline: The point at which some deliverable produced during the software engineering process is put under formal change control.
Benchmark Testing: Tests that use representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration.
Beta Testing: Testing of a pre-release version of a software product, conducted by customers.
Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.
Black Box Testing: Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.
Bottom Up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
Boundary Testing: Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)
Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner.
Boundary Value Analysis: BVA is similar to Equivalence Partitioning but focuses on "corner cases", i.e. values at or just outside the range defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001 (see the sketch after these definitions).
Branch Testing: Testing in which all branches in the program source code are tested at least once.
Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.
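A minimal Python sketch of the Boundary Value Analysis example above, assuming a hypothetical function that accepts values from -100 to 1000 inclusive:

def accepts(value):  # stand-in for the specified function
    return -100 <= value <= 1000

# Test at each boundary, just inside it, and just outside it.
for value, expected in [(-101, False), (-100, True), (-99, True),
                        (999, True), (1000, True), (1001, False)]:
    assert accepts(value) == expected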
Tuesday, June 10, 2008
SOFTWARE PROJECT MANAGEMENT
Software project management is all about managing the planning, monitoring, and execution of a project. In general, there are four successive processes that bring a system into being:
Requirement Gathering
Feasibility Study
Project Planning
Project Execution
a) Requirement Gathering
The requirements process is a full system life cycle set of activities that includes:
Understanding the customers' needs and expectations
Identifying and analyzing the requirements
Defining the requirements
Clarifying and restating the requirements
Prioritizing requirements
Partitioning requirements
Tracking requirements
Managing requirements
Testing and verifying requirements
Validating requirements
Requirements analysis and management need additional attention as key factors in the success of systems and software development projects.
Recommended Requirements Gathering Practices
The following is a list of recommended requirements gathering practices. They are based on the author's extensive review of industry literature combined with the practical experiences of requirements analysts who have supported dozens of projects.
Understand a project vision and scope document.
Initiate a project glossary that provides definitions of words that are acceptable to and used by customers/users and the developers, and a list of acronyms to facilitate effective communication.
Evolve the real requirements via a "joint" customer/user and developer effort. Focus on product benefits (necessary requirements), not features. Address the minimum and highest priority requirements needed to meet real customer and user needs.
Document the rationale for each requirement (why it is needed).
Establish a mechanism to control changes to requirements and new requirements.
Prioritize the real requirements to determine those that should be met in the first release of the product and those that can be addressed subsequently.
When the requirements are volatile (and perhaps even when they are not), consider an incremental development approach. This acknowledges that some of the requirements are "unknowable" until customers and users start using the system.
Use peer reviews and inspections of all requirements work products.
Use an industry-strength automated requirements tool.
Assign attributes to each requirement.
Provide traceability.
Maintain the history of each requirement (a sketch of such a requirement record follows this list).
Involve customers and users throughout the development effort.
Perform requirements validation and verification activities in the requirements gathering process to ensure that each requirement is testable.
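As a minimal sketch of what assigning attributes, providing traceability, and maintaining history can look like, here is a hypothetical requirement record in Python (all field names and values are invented; a real project would normally keep this in a requirements tool):

from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str     # unique identifier, the basis for traceability
    statement: str  # the requirement itself
    rationale: str  # why it is needed
    priority: int   # supports release prioritization
    source: str     # who asked for it, or which document it came from
    history: list = field(default_factory=list)  # change log entries

req = Requirement(
    req_id="REQ-042",
    statement="The system shall export reports as PDF.",
    rationale="Customers archive reports for audits.",
    priority=1,
    source="Customer interview notes",
)
req.history.append("2008-06-01: priority raised from 2 to 1")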
Quality testing dashboard
The tool would gather information as test cases were created, as problem reports were entered, and as test cases were executed. The data would be collected automatically into a database, kept current to the second, and reporting would be available at all times.
Because the test management system fosters a structured test process, it can provide several reports and processes that would otherwise require extensive manual data collection, organization, analysis, and reporting.
Throughout the lifecycle of a project, the test management system can provide relevant status reporting to facilitate planning, test execution, results tracking, and release decisions.
1. During test development, reports are available to determine what work has been completed and what tasks remain open.
2. During execution, the test management system tracks which scripts have been executed and which have not, the result of each execution, the requirements coverage achieved, and links to defects reported from failed test cases, providing a complete view of release readiness.
Reports based on defect tracking data alone show incomplete status; for example, a report that there are ten open defects does not tell much unless we also know how many test cases have been executed and how much requirements coverage those test cases achieve. Test management data can supply this missing information: test case metrics complement defect metrics and give a better view of product quality.
Apart from this, other reports can be generated based on different attributes such as type of test, module, and so on. Test management can provide objective, accurate, real-time information, which is just what is needed to judge the quality of a product. This is the most important benefit of having a structured testing process and tool. Based on the available test reports, the product manager can make informed decisions about the quality of the application under development.
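As a rough illustration of how such figures fall out of test-management data, here is a minimal sketch; the record layout, IDs, and numbers below are invented, and a real tool would expose equivalent data through its reports or API.

# A minimal sketch of dashboard metrics derived from test-management data.
test_cases = [
    {"id": "TC-01", "covers": ["REQ-1"], "result": "pass"},
    {"id": "TC-02", "covers": ["REQ-2"], "result": "fail"},
    {"id": "TC-03", "covers": ["REQ-2", "REQ-3"], "result": None},  # not yet run
]
all_requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
open_defects = 10

executed = [tc for tc in test_cases if tc["result"] is not None]
passed = [tc for tc in executed if tc["result"] == "pass"]
covered = {req for tc in executed for req in tc["covers"]}

print(f"Executed: {len(executed)}/{len(test_cases)} test cases")
print(f"Pass rate: {len(passed)}/{len(executed)}")
print(f"Requirements coverage: {len(covered)}/{len(all_requirements)}")
print(f"Open defects: {open_defects} (meaningful only alongside the figures above)")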
Preferred Requirements Gathering Techniques - Risk Analysis 5
Risk Assessment for Projects
At least 50% of all projects (if not many more) are unsuccessful in the sense that they do not achieve their objectives, do not deliver the promised results, sacrifice the predefined quality, are not completed within the given schedule, or use far more resources than originally planned.
There is a multitude of reasons for projects to fail. Projects often come on top of the usual workload, and members of the project team belong to different departments, i.e. their first accountability is to their line manager, which often brings them into conflict with the project work. Team members have to work overtime if they want to complete their project tasks. In the end, project work is often sacrificed, and time budgets are often insufficient.
What is mostly neglected is that the occurrence of problems in project implementation increases with the complexity and length of the project. Larger and more complex projects that run for more than a year fail for other reasons. These projects often have permanent staff who are released from other tasks to work full time on the project, and well-established budgets. However, such projects depend on a large number of external assumptions that influence their outcomes. It is impossible to predict clearly the future and the impact of the various uncertain influence factors, and many project plans are too rigid to respond flexibly to changing needs.
Common to most projects is the lack of appropriate and transparent communication. Team members (and other stakeholders) often do not share a common understanding of the project's goals and strategies, so it is important to unveil these misunderstandings and hidden agendas from the very beginning. The following tool, if applied in a project planning session, helps to uncover issues that might otherwise remain undiscussed (a minimal scoring sketch follows the factor list below).
Explanations:
Business Level: Does the project have a strategic importance for the organization?
Length: How long is the intended implementation time?
Complexity: Does the project cover various business areas / objectives?
Technology: Is the technology to be applied well established, or is it a technology that has yet to be developed?
Number of organizational units involved: cross-functional / geographical areas, etc.
Costs: the estimated costs of the project
Overall risk of failure: How would you personally rank the risk that the project cannot achieve the objectives with the intended resources?
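A minimal sketch of how such a worksheet might be scored; the 1-5 scale and the equal weighting of factors are assumptions for illustration, not part of the original tool.

# A minimal risk-assessment worksheet; factor names follow the list above,
# scores and weighting are assumed for illustration.
factors = {
    "business_level": 4,   # strategic importance (1 = low, 5 = high)
    "length": 3,           # implementation time (1 = short, 5 = multi-year)
    "complexity": 5,       # number of business areas / objectives touched
    "technology": 2,       # 1 = well established, 5 = still to be developed
    "units_involved": 4,   # cross-functional / geographical spread
    "costs": 3,            # estimated cost relative to typical projects
}
overall = sum(factors.values()) / len(factors)
print(f"Average risk score: {overall:.1f} / 5")
# Each stakeholder scores independently; large disagreements between scores
# are exactly the hidden assumptions the planning session should discuss.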
Preferred Requirements Gathering Techniques - 5
Effort Estimation
Effort estimation consists in predicting how many hours of work and how many workers are needed to develop a project. The effort invested in a software project is probably one of the most important and most analysed variables in project management in recent years. Determining the value of this variable when initiating a software project allows us to plan forthcoming activities adequately. As far as estimation and prediction are concerned, there are still a number of unsolved problems and errors. To obtain good results it is essential to take previous projects into consideration. Estimating effort with a high degree of reliability is a problem that has not yet been solved, and the project manager has to deal with it from the very beginning.
Cost Estimation
It is the responsibility of the project manager to make accurate estimations of effort and cost. This is particularly true for projects subject to competitive bidding, where a bid too high compared with competitors would result in losing the contract, while a bid too low could result in a loss to the organisation. This does not mean that internal projects are unimportant: from a project leader's estimate, management often decides whether to proceed with the project. Industry needs accurate estimates of effort and size at a very early stage in a project; however, when software cost estimates are made early in the development process, they can be based on wrong or incomplete requirements. A software cost estimation process is the set of techniques and procedures that an organisation uses to arrive at an estimate. An important aspect of software projects is knowing the cost, and the major contributing factor is effort.
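The article does not prescribe a particular estimation model, but Basic COCOMO (Boehm, 1981) is one widely cited illustration of estimating effort and schedule from delivered size. A minimal sketch, using the published constants for an "organic" (small, in-house) project class:

# A sketch of Basic COCOMO for an "organic" project class; this is one
# well-known model, not "the" method the article prescribes.
def basic_cocomo_organic(kloc: float):
    effort_pm = 2.4 * kloc ** 1.05          # effort in person-months
    duration_m = 2.5 * effort_pm ** 0.38    # schedule in months
    staff = effort_pm / duration_m          # average headcount
    return effort_pm, duration_m, staff

effort, duration, staff = basic_cocomo_organic(32)  # e.g. a 32 KLOC project
print(f"Effort: {effort:.0f} person-months, "
      f"duration: {duration:.1f} months, avg staff: {staff:.1f}")

Basic COCOMO deliberately ignores most cost drivers, which is why later variants added adjustment factors; treat the output as a starting point, not a commitment.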
Why is software cost estimation (SCE) difficult and error prone?
Software cost estimation requires a significant amount of effort to perform correctly.
SCE is often done hurriedly, without an appreciation for the effort required.
It requires experience at developing estimates, especially for large projects.
Human bias: an estimator is likely to consider how long a certain portion of the system would take and then merely extrapolate this estimate to the rest of the system, ignoring the non-linear aspects of software development.
The causes of poor and inaccurate estimation
Imprecise and drifting requirements.
New software projects are nearly always different from the last.
Software practitioners don't collect enough information about past projects (a calibration sketch follows this list).
Estimates are forced to match the resources available.
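To illustrate the third cause, even a small history of past projects can calibrate a power-law effort model. A minimal sketch fitting effort = a * size^b by least squares in log space; the sample data below is invented.

# Calibrate effort = a * size^b from (invented) past-project data by
# ordinary least squares on the log-transformed values.
import math

past = [(10, 26), (25, 62), (40, 105), (80, 230)]  # (KLOC, person-months)
xs = [math.log(size) for size, _ in past]
ys = [math.log(effort) for _, effort in past]
n = len(past)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = math.exp(mean_y - b * mean_x)
print(f"Calibrated model: effort = {a:.2f} * KLOC^{b:.2f}")
print(f"Estimate for 50 KLOC: {a * 50 ** b:.0f} person-months")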
Preferred Requirements Gathering Techniques - 3
Interface Analysis.
Missing or incorrect interfaces are often a major cause of cost overruns and product failures. Identifying external interfaces early clarifies product scope, aids risk assessment, reduces product development costs, and improves customer satisfaction. The steps of identifying, simplifying, controlling, documenting, communicating, and monitoring interfaces help to reduce the risk of problems related to interfaces.
Please see the attached Requirement Analysis Template.
b) The Feasibility Study
The Feasibility Study uses technical information and cost data to determine the economic potential and practicality (i.e. feasibility) of a project. It uses techniques that help evaluate a project and/or compare it with other projects; factors such as interest rates, operating costs, and depreciation are generally considered. The following are addressed during the feasibility study (a cost-benefit sketch follows this list):
An abstract definition of the problem
Formulation of different solution strategies
Examination of alternative solution strategies (in terms of benefits, resource requirements, costs, etc.)
Cost-benefit analysis to determine the best strategy
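As a rough illustration of the cost-benefit step, a net-present-value (NPV) comparison discounts each strategy's future benefits at an assumed interest rate. All figures, the 8% rate, and the strategy names below are invented.

# A minimal cost-benefit sketch: compare strategies by net present value.
def npv(rate: float, cashflows: list) -> float:
    """Net present value; cashflows[0] is the initial outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

strategies = {
    "build in-house": [-120_000, 50_000, 60_000, 60_000],
    "buy package":    [-80_000, 35_000, 40_000, 40_000],
}
for name, flows in strategies.items():
    print(f"{name}: NPV = {npv(0.08, flows):,.0f}")  # 8% assumed discount rate
# Other things being equal, the strategy with the highest positive NPV
# is the economically preferred one.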
Who uses it?
Technical Architect, Business Analyst, Configuration Manager, Development Manager, Project Manager, IT Manager, System Administrator, Test Manager, Documentation Manager, Technical Writers.
When is it used?
The Feasibility Study analyses potential solutions against a set of requirements, evaluates their ability to meet the objectives, describes a recommended solution, and offers a justification for this selection.
c) Project Planning
When a project is estimated to be feasible, project planning is done. Project planning consists of the following steps (a minimal scheduling sketch follows this list):
Effort, Cost, Resource and Project Duration planning
Risk Analysis and mitigation plan
Project Scheduling
Staffing organization and Staffing Plan
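As an illustration of the scheduling step, a critical-path forward pass over a small task network gives the minimum project duration. The tasks, durations, and dependencies below are invented.

# A minimal critical-path forward pass: task -> (duration in days, predecessors).
tasks = {
    "requirements": (10, []),
    "design":       (15, ["requirements"]),
    "coding":       (30, ["design"]),
    "test plan":    (8,  ["requirements"]),
    "testing":      (20, ["coding", "test plan"]),
}

earliest_finish = {}
def finish(task: str) -> int:
    """Earliest finish of a task = its duration + latest finish of its predecessors."""
    if task not in earliest_finish:
        duration, preds = tasks[task]
        earliest_finish[task] = duration + max(
            (finish(p) for p in preds), default=0)
    return earliest_finish[task]

project_days = max(finish(t) for t in tasks)
print(f"Minimum project duration: {project_days} days")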
Preferred Requirements Gathering Techniques - 2
Prototyping.
Prototyping is a technique for building a quick and rough version of a desired system or parts of that system. The prototype illustrates the capabilities of the system to users and designers. It serves as a communications mechanism to allow reviewers to understand interactions with the system. Prototyping sometimes gives an impression that developers are further along than is actually the case, giving users an overly optimistic impression of completion possibilities. Prototypes can be combined effectively with other approaches such as JAD and models.
Use Cases.
A use case is a picture of actions a system performs, depicting the actors. It should be accompanied by a textual description and not be used in isolation of other requirements gathering techniques. Use cases should always be supplemented with quality attributes and other information such as interface characteristics. Many developers believe that use cases and scenarios (descriptions of sequences of events) facilitate team communication. They provide a context for the requirements by expressing sequences of events and a common language for end users and the technical team.
Be cautioned that use cases alone do not provide enough information to enable development activities. Other requirements elicitation techniques should also be used in conjunction with use cases. Use operational concepts as a simple, cost-effective way to build a consensus among stakeholders and to address two large classes of requirements errors: omitted requirements and conflicting requirements. Operational concepts identify user interface issues early, provide opportunities for early validation, and form a foundation for testing scenarios in product verification.
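As a rough illustration of pairing the diagram with the textual description recommended above, a use case can be captured as a structured record that also carries quality attributes and interface characteristics. All names and steps below are invented.

# A minimal sketch of a use case as a structured record; every value here
# is illustrative, not a prescribed template.
use_case = {
    "name": "Withdraw cash",
    "actors": ["Account holder", "Bank system"],
    "description": "An account holder withdraws cash from an ATM.",
    "main_scenario": [
        "Account holder inserts card and enters PIN.",
        "System validates the card and PIN.",
        "Account holder selects an amount.",
        "System dispenses cash and updates the balance.",
    ],
    "quality_attributes": {"response_time": "< 5 s per step",
                           "availability": "99.9%"},
    "interfaces": ["card reader", "core banking network"],
}
for step_no, step in enumerate(use_case["main_scenario"], start=1):
    print(step_no, step)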