Thanks to everyone who provided data for my latest research project on tester to developer ratios. This topic has been an interest of mine for over ten years, ever since I did my first survey on tester to developer ratios. The results of that survey and my thoughts at the time are in an article called The Elusive Tester to Developer Ratio.

This short article documents my early findings. I plan to continue surveying and gathering data, so if you did not get in on the first round, I would like to hear about your ratios. You can contact me here.

The participants of the recent survey were subscribers to my newsletter, The Software Quality Advisor, and the audience at my StarWest tutorial on Becoming an Influential Test Team Leader. There were 53 respondents in all, mostly from North America, but six were from Europe and one was from Asia.

I asked four questions:

1) How many developers are in your organization?
2) How many testers are in your organization?
3) On a scale of 1 to 6, where 1 is poor and 6 is super, how would you rate the effectiveness of your current ratio?
4) Do you have any anecdotal information about the effectiveness of your current ratio?

  • The leanest ratio was twenty developers to one tester (effectiveness rating of “two”), while the richest ratio was fifteen developers to eighteen testers (effectiveness rating of “four”).
  • There was one anomalous response of four developers to zero testers (the effectiveness rating on that one was “three”).
  • The average ratio was 4.52 developers to one tester.
  • The most common response was three developers to one tester (six responses); the next most common was 2.5 developers per tester (five responses).
  • There were twenty-six responses with developer to tester ratios of 3:1 or lower.
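
For the curious, figures like these are straightforward to compute once the responses are in a simple table. Here is a minimal Python sketch of the calculation; the `responses` list and its values are hypothetical stand-ins, not the actual survey data:

```python
from collections import Counter
from statistics import mean

# Each response: (developers, testers, effectiveness rating on the 1-6 scale).
# These values are illustrative only, not the actual survey responses.
responses = [
    (20, 1, 2),   # the leanest ratio reported
    (15, 18, 4),  # the richest ratio reported
    (3, 1, 4),
    (3, 1, 5),
    (5, 2, 3),
]

# Developer-to-tester ratio per response (skipping any zero-tester anomaly).
ratios = [devs / testers for devs, testers, _ in responses if testers > 0]

print(f"Average ratio: {mean(ratios):.2f} developers per tester")
print("Most common ratio:", Counter(round(r, 1) for r in ratios).most_common(1))
print("Responses at 3:1 or lower:", sum(1 for r in ratios if r <= 3))
```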

Here are some of my initial observations and comments:

1) The responses varied greatly.

For those looking for an “industry norm” of developer to tester ratios, this may show that the range of workable ratios is wide. Effective testing may be achieved through better practices, better tools, and leveraging developer-based testing, rather than simply adding more testers.

2) About half of the responses were at the “richer” ratios.

The average effectiveness reported by this group (ratios of 3:1 or lower) was four, above the midpoint of the scale. Interestingly, the average effectiveness for the higher ratios was three, which is not a huge difference from the richer group.

3) In the higher ratio group, some respondents still reported above-average test effectiveness ratings of four or five.

This tells me that you can have a higher ratio and still be effective at software testing. Put another way, the magic of good testing may not be in the ratio of developers to testers.

I have always questioned the idea of using developer to tester ratio as a way to staff or estimate testing efforts. Sheer body count is just not enough information to base testing effort upon.

That said, I think developer to tester ratios may be a helpful metric to understand the workload in a test organization. For example, if I were presented with a situation where the developer to tester ratio is ten to one, I would ask:

  • Are any test automation tools being used? If so, how effective are they?
  • How much responsibility do developers have in the testing process?
  • Is testing based on risk?
  • Are test optimization techniques used in test design?
  • What is the defect detection percentage (DDP)? (A sketch of this calculation follows the list.)
  • Are defect trends tracked and studied?
  • Have the developers and testers been trained in software testing?
  • Is there a defined testing process in place and being used?
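
Of these questions, DDP is the most quantitative, so here is a minimal sketch of the calculation in Python. The function name and the zero-defect convention are my own, for illustration only:

```python
def defect_detection_percentage(found_in_test: int, found_after_release: int) -> float:
    """DDP: the share of all known defects that the test team caught."""
    total = found_in_test + found_after_release
    if total == 0:
        return 0.0  # no known defects yet; returning 0 here is just a convention
    return 100.0 * found_in_test / total

# Example: 90 defects caught in test, 10 escaped to production -> 90.0
print(defect_detection_percentage(90, 10))
```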

These questions would help determine the balance and effectiveness of the testing process. Before making team sizing decisions based on numbers of people alone, it may actually be better to use the developer to tester ratio as a metric to guide the testing process.

I did this on my first job as a test manager. I had a team of three people testing the work of thirty developers. The ten to one ratio told me that we could not test all the work coming our way.

We had no tools, just our wits. So, we developed a strategy:

1) Get management to lead the way in sending the message to developers that testing is part of their job
2) Train and mentor each developer to be a good tester
3) Test the high risk changes at the highest priority
4) Test anything a developer asked us to test (unless there was no documentation)
5) Do not test anything without a defined user requirement
6) Use cause/effect graphing and other test optimization techniques to get the most testing from the fewest tests (a toy sketch of this idea follows the list)
7) Build a robust and repeatable test data set for manual regression testing
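
To make item 6 a little more concrete, here is a toy Python sketch of the decision-table thinking behind cause/effect graphing. A real cause/effect graphing pass prunes infeasible and redundant combinations to minimize the test count; this brute-force version, with made-up causes and a single AND effect, only illustrates how causes map to an effect:

```python
from itertools import product

# Hypothetical causes for an "approve withdrawal" effect.
CAUSES = ["valid_account", "sufficient_funds", "card_not_expired"]

def approve(valid_account: bool, sufficient_funds: bool, card_not_expired: bool) -> bool:
    # A single AND node: the effect occurs only when every cause holds.
    return valid_account and sufficient_funds and card_not_expired

# Enumerate the full decision table. Cause/effect graphing would trim this
# down to the few combinations that cover each logical path.
for combo in product([False, True], repeat=len(CAUSES)):
    print(dict(zip(CAUSES, combo)), "->", approve(*combo))
```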

The result was that 1) we kept up with the workload, and 2) the rate of changes containing defects dropped from 50% to 2%. Even then, we still had a ten to one developer to tester ratio. This approach may work for you, too.


I hope this information helps you understand your own ratio a little better. If you would like to contribute your own ratio to my data, just reply to me here with answers to these four questions:

1) How many developers are in your organization?
2) How many testers are in your organization?
3) On a scale of 1 to 6, where 1 is poor and 6 is super, how would you rate the effectiveness of your current ratio?
4) Do you have any anecdotal information about the effectiveness of your current ratio?

Thanks!