Increasing competition and rapid leaps in technology have forced companies to adopt innovative approaches to assessing their processes, products and services. This assessment helps them improve their business so that they succeed, earn greater profits and capture a larger share of the market.
A metric is the cornerstone of assessment and the foundation of any business improvement.
A metric is a standard unit of measurement that quantifies results. Metrics used for evaluating software processes, products and services are termed Software Metrics.
Paul Goodman defines Software Metrics as follows:
Software Metrics is a measurement-based technique applied to processes, products and services to supply engineering and management information, and to work on the information supplied so as to improve those processes, products and services, where required.
3.0 Importance of Metrics
Metrics are used to improve the quality and productivity of products and services, thus achieving customer satisfaction.
They make it easy for management to digest one number and drill down, if required.
Trends in different metrics act as monitors when a process is going out of control.
Metrics provide a basis for improving the current process.
4.0 Points to Remember
Use only metrics for which accurate and complete data can be collected.
Metrics must be easy to explain and evaluate.
The benchmark for a metric varies from organization to organization and also from person to person.
5.0 Setting Up Metrics
The process involved in setting up metrics:
6.0 Types of Software Testing Metrics
Based on the type of testing performed, the software testing metrics fall into the following categories:
Manual Testing Metrics
Performance Testing Metrics
Automation Testing Metrics
The following figure shows the different software testing metrics.
Let’s have a look at each of them.
6.1 Manual Testing Metrics
6.1.1 Test Case Productivity (TCP)
This metric gives the test-case-writing productivity, from which one can draw a conclusive remark.
Table columns: Test Case Name, Total Raw Steps.
In this example the test cases total 183 raw steps, and the effort taken to write them is 8 hours:
Test Case Productivity = 183 steps / 8 hours ≈ 23 steps/hour
One can compare the test case productivity value with that of previous release(s) and draw the most effective conclusion from it.
TC Productivity Trend
6.1.2 Test Execution Summary
This metric classifies the test cases by execution status, along with the reason, where available, for each status. It gives a statistical view of the release. One can collect the number of test cases executed with the following statuses:
Pass.
Fail, with the reason for failure.
Unable to Test, with a reason. Some of the reasons for this status are time crunch, postponed defect, setup issue and out of scope.
One can also show the trend of the classification of reasons for the various Unable to Test and Fail test cases.
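A hedged sketch of how such a summary could be tallied; the test case names, statuses and reasons below are illustrative, not taken from the paper:

```python
from collections import Counter

# Hypothetical execution results: (test case, status, reason or None)
executions = [
    ("TC_01", "Pass", None),
    ("TC_02", "Fail", "Defect in login flow"),
    ("TC_03", "Unable to Test", "Setup issue"),
    ("TC_04", "Unable to Test", "Out of scope"),
    ("TC_05", "Pass", None),
]

# Count test cases per status, and per stated reason.
status_summary = Counter(status for _, status, _ in executions)
reason_summary = Counter(reason for _, _, reason in executions if reason)

print(dict(status_summary))  # {'Pass': 2, 'Fail': 1, 'Unable to Test': 2}
```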
6.1.3 Defect Acceptance (DA)
This metric determines the number of valid defects that the testing team has identified during execution.
The value of this metric can be compared with that of previous releases to get a better picture.
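The formula is not reproduced at this point in the paper; a common formulation, assumed for this sketch, expresses Defect Acceptance as the percentage of valid defects out of all defects raised:

```python
def defect_acceptance(valid_defects: int, total_defects: int) -> float:
    # Assumed formula: DA% = (valid defects / total defects raised) * 100
    return valid_defects / total_defects * 100

# Illustrative numbers: 90 of 100 raised defects turn out to be valid.
print(defect_acceptance(90, 100))  # 90.0
```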
6.2 Performance Testing Metrics
6.2.1 Performance Severity Index (PSI)
This metric determines product quality against performance-based criteria, on which one can decide whether to release the product to the next phase, i.e. it indicates the quality of the product under test with respect to performance.
If a requirement is not met, one can assign a severity to the requirement so that a release decision can be taken with respect to performance.
For example, if average response time is an important requirement that has not been met, the tester can open a defect with severity Critical.
Then Performance Severity Index = (4 * 1) / 1 = 4 (Critical)
Performance Severity Trend
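Working backwards from the paper's example, PSI appears to be the weighted sum of severities of unmet performance requirements divided by their count; the severity weights below are an assumption, with only Critical = 4 implied by that example:

```python
# Assumed severity weights; only Critical = 4 is implied by the paper's example.
SEVERITY_WEIGHT = {"Low": 1, "Medium": 2, "High": 3, "Critical": 4}

def performance_severity_index(unmet_severities: list) -> float:
    """Average weighted severity of the performance requirements not met."""
    total = sum(SEVERITY_WEIGHT[s] for s in unmet_severities)
    return total / len(unmet_severities)

# One unmet requirement (average response time) at Critical severity:
print(performance_severity_index(["Critical"]))  # (4 * 1) / 1 = 4.0
```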
6.3 Automation Testing Metrics
6.3.1 Automation Scripting Productivity (ASP)
This metric gives the scripting productivity of automation test scripts, from which one can analyze and draw the most effective conclusion.
where Operations Performed is:
Number of clicks, i.e. clicks on which data is refreshed.
Number of input parameters.
Number of checkpoints added.
Note: the above calculation does not include any logic embedded into the script, which is rarely used.
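A minimal sketch of the calculation, assuming ASP = operations performed per hour of scripting effort; the operation counts below are illustrative, not from the paper:

```python
def automation_scripting_productivity(clicks: int, inputs: int,
                                      checkpoints: int, effort_hours: float) -> float:
    # Operations performed = data-refreshing clicks + input parameters + checkpoints
    operations = clicks + inputs + checkpoints
    return operations / effort_hours

# Illustrative: a script with 10 clicks, 5 inputs and 9 checkpoints, written in 2 hours.
print(automation_scripting_productivity(10, 5, 9, 2))  # 12.0 operations/hour
```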
If a script is re-used, the script development cost becomes the script update cost.
Using this metric one can draw an effective conclusion with respect to cost, which plays a vital role in the IT industry.
6.4 Common Metrics for All Types of Testing
6.4.1 Effort Variance (EV)
This metric gives the variance of the actual effort from the estimated effort.
Effort Variance Trend
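The formula itself is not shown here; a standard formulation, assumed for this sketch, is the percentage deviation of actual effort from estimated effort:

```python
def effort_variance(estimated_hours: float, actual_hours: float) -> float:
    # Assumed formula: EV% = ((actual - estimated) / estimated) * 100
    return (actual_hours - estimated_hours) / estimated_hours * 100

# Illustrative: 100 hours estimated, 120 hours actually spent.
print(effort_variance(100, 120))  # 20.0, i.e. 20% over estimate
```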
6.4.2 Schedule Variance (SV)
This metric gives the variance of the actual schedule from the estimated schedule, i.e. in number of days.
Schedule Variance Trend
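Analogously to effort variance, a standard formulation is assumed here: the percentage deviation of actual schedule (in days) from the estimated schedule:

```python
def schedule_variance(estimated_days: float, actual_days: float) -> float:
    # Assumed formula: SV% = ((actual - estimated) / estimated) * 100
    return (actual_days - estimated_days) / estimated_days * 100

# Illustrative: testing estimated at 10 days but finished in 12.
print(schedule_variance(10, 12))  # 20.0, i.e. 20% behind schedule
```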
6.4.3 Scope Change (SC)
This metric indicates how stable the scope of testing is.
Total Scope = Previous Scope + New Scope, if Scope increases
Total Scope = Previous Scope - New Scope, if Scope decreases
Scope Change Trend for one release
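The Total Scope rules above can be sketched as follows; the percentage formula is an assumption, expressing Scope Change as the relative change from the previous scope:

```python
def total_scope(previous_scope: int, new_scope: int, increased: bool = True) -> int:
    # Total Scope = Previous Scope + New Scope if scope increases, else Previous - New
    return previous_scope + new_scope if increased else previous_scope - new_scope

def scope_change_pct(previous_scope: int, total: int) -> float:
    # Assumed: SC% = ((Total Scope - Previous Scope) / Previous Scope) * 100
    return (total - previous_scope) / previous_scope * 100

# Illustrative: 100 test cases in scope, 20 added mid-release.
t = total_scope(100, 20, increased=True)
print(t, scope_change_pct(100, t))  # 120 20.0
```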
7.0 Conclusion
A metric is the cornerstone of assessment and the foundation of any business improvement. It is a measurement-based technique applied to processes, products and services to supply engineering and management information, and to work on the information supplied so as to improve those processes, products and services, where required. Metrics indicate the level of customer satisfaction, make it easy for management to digest a number and drill down whenever required, and act as a monitor when a process is going out of control.
The following table summarizes the software testing metrics discussed in this paper:
Manual Testing Metrics
Test Case Productivity: Provides the number of steps written per hour.
Test Execution Summary: Provides a statistical view of execution for the release, along with status and reason.
Defect Acceptance: Indicates the stability and reliability of the application.
Defect Rejection: Provides the percentage of invalid defects.
Bad Fix Defect: Indicates the effectiveness of the defect-resolution process.
Test Execution Productivity: Provides detail of the test cases executed per day.
Test Efficiency: Indicates the testing capability of the tester in identifying defects.
A Cognizant India Test Engineer with over 3.5 years of proven quality and test management experience in the Financial Services sector. Consistently developed and implemented new ideas and techniques which have led to dramatic quality improvements within projects. Published two whitepapers, Customer Satisfaction through Quality Index and Sanity Testing, which are available at: -
Email: Lokesh.firstname.lastname@example.org