NOTE: This article was originally written in 2001. I still feel that the tester to developer ratio is of limited usefulness, but after you read this article, I have more recent research which you can read here.
Plus, I have an on-demand webinar recording here.
One of the most frequently asked questions I get is worded roughly:
"Can you tell me what the industry standard tester to developer ratio is?"
My answer is "No, because there is no way to get an accurate measurement of such a ratio, especially across the IT industry where some organizations perform no independent testing at all."
However, I dislike simply dismissing the question because I think there is a more important underlying question of:
"Can you please tell me a quick and easy way to determine the correct number of testers based on the number of developers?"
Once again, the answer is "No, because there is more to determining the number of testers than simply basing it on the number of developers."
There are more accurate and reliable ways to tell how many testers you need for a particular project than just applying a ratio. In my experience, basing workforce plans on ratios is risky because by the time you learn that you don't have enough people, it is often too late to bring additional people into the testing effort. Conversely, you may realize during testing that you have more people than you need, which adds unnecessary expense to the project. I haven't seen the latter scenario occur nearly as often as the former.
The Problem With Ratios as a Benchmark
Let it be clearly understood that I don't completely discount the use of ratios in planning if they are your ratios, based on your own experience, technology, and organizational structure. What I do see as a risk is an organization taking another organization's ratios and applying them to its own projects without regard to differences in technology, process maturity, and skill levels.
A Case Study
For example, Company A is planning a test for a new system developed in-house. They have just established a new testing process and have hired five new testers for the project. All of the testers have less than two years' experience in testing and less than five years with the company. Company A has 15 developers using visual development tools. Management has directed that each module be independently tested. In planning staffing levels, Company A checked with a variety of sources and concluded that a workable tester to developer ratio would be 1:3. After the test had been underway for about three days, it became apparent that the testers would need to work longer days to keep up with the output from the developers.
One of the companies that Company A contacted to get a benchmark was Company B. Company B has had a testing process in place for three years and has also invested in an automated test tool to handle the basic tests. Also, the developers in Company B perform a high level of testing before the software is released for independent testing. Company B currently has a tester to developer ratio of 1:3; however, three years ago they had one tester for every two developers.
In comparing the characteristics of the two companies, it is easy to see how one company's experience might not apply to another organization. Why, then, is there such an interest in finding "the" ratio of testers to developers? My theory is that people believe that finding a ratio could:
- Provide a sanity check for current staffing levels
- Justify future staffing increases
- Provide a quick way to estimate workload and staffing
However, there are at least three key problems with the search for "the" ratio of testers to developers.
Problem #1 - There is not a one-size-fits-all solution.
As we saw in the example, each organization has its own mix of people, processes, and tools. Even if there were an industry standard, it would have to be a generic one, which at best would be a starting point for staffing estimates.
Problem #2 - The "industry standard" sample size is small compared to the actual number of organizations performing testing.
Even with sample sizes of 1,000 organizations or more, the average ratio could be misleading. Much depends on the industry category, geographic location, and the accuracy of the information being reported. As we learned in the Year 2000 efforts, there simply are not enough companies that measure what they do to get an across-the-board sampling. Since the more mature IT organizations are the ones that measure projects, the measurements are naturally skewed toward mature organizations.
In the case of tester to developer ratios, the difficulty is not in measuring, but rather in gathering the data and relating it in a way that yields meaningful information.
Problem #3 - A tester to developer ratio assumes that all work products must be independently tested.
Although it sounds like good practice to independently test all work, this often creates a bottleneck in the overall software delivery process. There may be cases, such as low-risk modules, where the work can be tested and certified by the development team.
Some Research and What it Means
In an effort to develop some findings concerning the tester to developer ratio, I took an informal survey at QAI's 20th Annual Software Testing Conference in September of 2000. Here are my findings:
- There were 29 respondents
- The minimum ratio was 0 testers to 1 developer
- The maximum ratio was 1 tester to 30 developers
- The most common ratio was 1 tester to 3 developers
- The average ratio was 1 tester to 7 developers
- The median ratio was 1 tester to 5 developers
The majority of responses fell at 1:7 or lower, that is, seven or fewer developers per tester.
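One way to see why the average (1:7) sits above the median (1:5) and the mode (1:3) is to run the summary statistics over a skewed set of responses. The individual survey responses were not published, so the developers-per-tester figures in this small Python sketch are invented solely for illustration, chosen only to be roughly consistent with the summary above (the 0-tester response is left out because developers-per-tester is undefined for it):

    # Hypothetical developers-per-tester figures, invented to roughly
    # match the survey summary (mode 3, median 5, mean about 7, max 30);
    # these are not the actual survey responses.
    from statistics import mean, median, mode

    developers_per_tester = [
        1, 2, 2, 3, 3, 3, 3, 3, 3, 3,
        4, 4, 5, 5, 5, 6, 6, 7, 7, 8,
        9, 10, 10, 12, 14, 15, 20, 30,
    ]

    print("mode:  ", mode(developers_per_tester))    # most common: 1:3
    print("median:", median(developers_per_tester))  # middle value: 1:5
    print("mean:  ", round(mean(developers_per_tester), 1))  # pulled upward by the 1:30 outlier

A single 1:30 outlier is enough to drag the mean well above the median, which is one more reason to be wary of quoting "the" average ratio.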
Observations and Analysis
This was not a scientific survey, and the sample size was very small. In addition, there is no way to judge factors such as process, industry, people, and tools. Even with these limitations, the results show that many organizations fall at the lower end of the ratio range. An informal benchmark of 1 tester to 3 developers has been discussed for a while, and these findings seem to confirm that 1:3 is a popular ratio. An interesting follow-up would be to correlate the tester to developer ratio with defect removal efficiency percentages, or with Capability Maturity Model (CMM) levels.
A Better Way
One of my frequent observations about the way testing efforts are estimated is that too often the estimate is made before the scope of testing is determined. This applies not only to people-hours, but also to overall time windows and staffing levels.
Determine the Scope More Accurately
One reason the testing effort is often estimated incorrectly is that the estimate is not based on measurable items, such as test cases, testable requirements, or testable transactions. I sometimes refer to this as the "two, four, six, eight rule", where the time allocated to testing is two, four, six or eight weeks, depending on what sounds good on the day the project plan is finalized.
The single best thing that could be done in estimating the testing effort is to base it on a defined scope.
If requirements are defined early in the project, there is a basis for determining the scope of testing. At a minimum, test objectives should relate to project objectives. By the time the project schedule and statement of work are defined, there should be enough information to estimate testing activities, provided functions are defined at a quantifiable level. Testable requirements are a way to estimate tests at a detailed level. However, care must be taken to account for all phases and types of testing a particular requirement needs, plus the time to plan and evaluate the tests, plus the time to repeat tests as necessary.
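As a rough sketch of what a scope-based estimate can look like, the Python fragment below multiplies testable requirements by test cases per requirement and per-case effort, then adds allowances for planning, evaluation, and repeated runs. Every figure is an invented placeholder; the point is the shape of the calculation, and each factor should be calibrated from your own project history:

    # A minimal scope-based estimating sketch. Every number here is a
    # hypothetical placeholder; calibrate each factor from your own
    # history rather than from industry ratios.
    testable_requirements = 120   # requirements defined at a quantifiable level
    cases_per_requirement = 3     # test cases needed to cover one requirement
    hours_per_case = 1.5          # design and execute one test case once
    rerun_factor = 2.5            # average executions per case (retests of fixes)
    planning_overhead = 0.20      # test planning, as a fraction of execution
    evaluation_overhead = 0.15    # results analysis and reporting

    execution_hours = (testable_requirements * cases_per_requirement
                       * hours_per_case * rerun_factor)
    total_hours = execution_hours * (1 + planning_overhead + evaluation_overhead)

    hours_per_tester_week = 30    # productive test hours per tester per week
    weeks_available = 6           # the window the project plan allows

    testers_needed = total_hours / (hours_per_tester_week * weeks_available)
    print(f"Estimated effort: {total_hours:.0f} hours")
    print(f"Testers needed in a {weeks_available}-week window: {testers_needed:.1f}")

Notice that the developer head count never enters the calculation; the staffing figure falls out of the scope of testing and the time window instead.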
Determine the Test Team Size Based on Management Capability
I often ask people, "What if the tester to developer ratio indicates you need 50 testers? Could you manage them? Could you get the funding to hire them? Could you even find 50 testers to hire? If not, could you train 50 testers?"
There is a practical limit at which you must look at time, cost and people and make a reasoned judgment to achieve a workable balance.
To this end, I often advise people to:
- Think about how many people they can effectively manage,
- Look at the scope of testing to see how much work will need to be performed,
- Assess the testing process to see if too much work is being shifted to testers,
- Modify the testing process, if necessary,
- Investigate the role of automated testing in their organization,
- Consider the phase and type of testing to be performed. Some types of testing, such as usability testing, can often be performed by small groups of people.
After analyzing the above items, a reasoned judgment can be made to staff a team that is the right size to get the job done - no matter how many developers you have.
Prioritize the Testing Effort
Another way to balance the testing workload is to prioritize the modules or areas of the system to be tested. It is possible to quantify testing based on risk, which is a function of the likelihood of failure and the impact of failure. Not only can you prioritize the testing in terms of the order of testing, but you can also adjust the types and extent of testing by risk.
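Here is a minimal sketch of that idea in Python: each module gets a likelihood and an impact score (say, 1 to 5), risk is their product, and the modules are tested in descending order of risk. The module names and scores are hypothetical examples.

    # Risk-based test prioritization: risk = likelihood of failure
    # multiplied by impact of failure. Module names and 1-5 scores
    # are hypothetical.
    modules = [
        # (module, likelihood, impact)
        ("payment processing", 4, 5),
        ("account login",      3, 5),
        ("report generation",  4, 2),
        ("help screens",       2, 1),
    ]

    ranked = sorted(modules, key=lambda m: m[1] * m[2], reverse=True)

    for name, likelihood, impact in ranked:
        risk = likelihood * impact
        # Higher-risk modules are tested first and most thoroughly;
        # the lowest-risk ones may get only a light pass, or be left
        # to the development team to certify.
        print(f"{name:20s} risk={risk:2d} (likelihood={likelihood}, impact={impact})")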
The fact is, even if you had all the time in the world, you still would not be able to test your software completely. Therefore, a line must be drawn somewhere to get the most work possible into the planned time frames.
Perform Contingency Planning
The project manager and test manager should work together to plan early in the project what can give if the deadline arrives and the product is not ready to release, either due to defects or incomplete testing. There are basically four areas that can be manipulated to meet an overall implementation goal. These areas are:
- Scope - perhaps some of the lesser-used functionality can be implemented in a later release.
- Schedule - perhaps the implementation deadline can be extended.
- Cost - perhaps additional people can be added to the project to get more work done in the planned timeframes. However, there is a danger of running afoul of Brooks' Law: adding people to a late software project makes it even later.
- Quality - perhaps certain types of testing could be minimized. However, this is also a risky choice.
Conclusion
At Rice Consulting Services, we will continue to research the question of tester to developer ratios to learn whether there really is a magic number, although to date we see many risks in simply applying a ratio. At best, the most commonly mentioned ratio of one tester to three developers can be used as a starting point for your staffing estimates. I advise people to base staffing estimates on their own history, processes, tools, and skill levels, and then use industry ratios (if you can find and trust them) as a validation.
If you would like to contribute to this research with your own tester to developer ratio, just go to the research section on this web site. I also welcome hearing about your experience in staffing the testing effort in your company. You can e-mail me from the contact page.
Bio
Randall Rice is a leading author, speaker and consultant in the field of software testing and software quality. Randy has 30 years' experience building and testing mission-critical projects in a variety of environments and is co-author of the book Surviving the Top Ten Challenges of Software Testing. He is the author and instructor of the Testing SOA and Structured User Acceptance Testing courses, presented by Rice Consulting Services.