What is User Acceptance Testing?


Over the past few years, there have been a variety of definitions applied to User Acceptance Testing (UAT). Your success in validating that a system or application is "fit for use" by the intended user depends on how you define this phase of testing.

For example, if you see UAT as a functional test based solely on user requirements, you will likely miss the same things in testing that were missed in defining the requirements. Another example is that if you see UAT as the tests that can be automated in agile testing, you may miss the "hands on" assessment of the actual user to determine how the application actually meets their needs.

I need to be clear that I am not saying that you must use my definition of UAT or else you will be hopelessly doomed to project failure. What I am saying is that there are a variety of views of UAT that may or may not meet your needs - and that you had better be sure you know the differences in the ways UAT is defined.

Who Should Perform UAT?

I'm a radical, so I suggest the actual current and/or future users do the planning, testing and evaluation for acceptance. There are those who see it differently. Some people prefer to have testers take the role of users. Others have a UAT team composed of former users who now do nothing but testing. In other organizations UAT might be performed by business analysts.

I like to have actual users perform UAT because 1) they are going to be using the system eventually anyway, 2) they know the current ways of doing their jobs and therefore can tell when something won't work for them, and 3) they should know what they will be getting in terms of system features and quality.

This is not without challenges. Here are some of the reasons that are typically given for not involving users in UAT:

1) Not enough time due to performing normal job duties
2) No training on the new system to be tested
3) No interest
4) Not enough testing knowledge or experience

These are all significant challenges, but they can typically be managed.

What is "Acceptance?"

In reality, contracts have been signed and money has been spent, so "acceptance" is typically not an "accept or reject" proposition. UAT is more about finding gaps between how the system works and how operational processes are performed.

A Word About Validation

It seems to me that the distinction between verification and validation has been lost in recent years. It's important that we understand the difference between these two types of testing so that we can get a complete and accurate evaluation of what we are testing.

I'll refer to the ISTQB glossary at this point, which references ISO 9000:

"verification: Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled. [ISO 9000]"

"validation: Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled. [ISO 9000]"

IEEE 1012-1998, the IEEE Standard for Software Verification and Validation, gives the same definitions as referenced in ISO 9000, but adds some notes.

For validation:

"1 - In design and development, validation concerns the process of examining a product to determine conformity with user needs."
"2 - Validation is normally performed on the final product under defined operating conditions. It may be necessary in earlier stages."

For verification:

"1 - In design and development, verification concerns the process of examining the result of a given activity to determine conformity with the stated requirement for that activity."

So, allow me to paraphrase a bit. Verification determines if something has been built according to specifications. Validation determines if something works as intended in the user's environment and meets their needs.

Those are two greatly differing activities!

UAT is generally considered to be validation. In fact, it is typically the only time validation is performed in a project. System testing, integration testing, unit testing, as well as reviews are all examples of verification because they are based on specifications and requirements. Therefore, it is very important that in the one validation opportunity we have, we get it right.

Now let's look at some of the differing views of UAT.

The Beta Test

In this definition, software is given (or sold) to users for them to try as they do their normal activities. Some beta testers go beyond that and actually try to break the software.

The problem with beta testing is that you never know how much testing was actually done. Even worse, you never know in advance how much testing will be done. If you're relying on people to beta test your software, you're probably going to miss a lot of things.

Beta testing does serve a useful purpose in finding configuration problems that might not otherwise be found in your own testing. It also provides the opportunity to get feedback about the product early. However, this still doesn't really meet the bar of validation, because beta testing does not imply acceptance and it often lacks the rigor of a controlled test.

Agile Acceptance Testing

In agile development, acceptance testing is the functional testing that is based on a user's stated needs. Functional tests are designed based on those needs. Some of the functional tests are automated, while others are performed manually. In agile methods, the developer may be the one actually performing these tests. This means that the user may or may not be observing the results of the test.
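As a minimal sketch of what such an automated acceptance test might look like, consider a test derived directly from a stated user need. The user story, the discount rule, and the function under test here are all hypothetical stand-ins for a real application:

```python
# Hypothetical automated acceptance test derived from a user story:
# "As a clerk, I can apply a 10% discount to orders over $100."

def apply_discount(order_total):
    """Toy stand-in for the system under test."""
    if order_total > 100:
        return round(order_total * 0.90, 2)
    return order_total

def test_discount_applied_to_large_order():
    # The stated need, expressed as a concrete check.
    assert apply_discount(200.00) == 180.00

def test_no_discount_on_small_order():
    assert apply_discount(50.00) == 50.00

if __name__ == "__main__":
    test_discount_applied_to_large_order()
    test_no_discount_on_small_order()
    print("acceptance checks passed")
```

Note that the test confirms the stated need was met - which is exactly why it leans toward verification: if the story itself misses what the clerk really does, the test passes anyway.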

In the absence of defined requirements (at least to the extent that they are traditionally defined), these acceptance tests are close to the functional tests that would be considered system testing in other development approaches. It's good that these tests are being done, but they are still more verification than validation.

By the way, the other major forms of testing seen in agile are test driven development (TDD) and exploratory testing. Both of these types of testing focus on finding defects as opposed to validating fitness for use.

The Traditional View

There is also much confusion in the traditional view of acceptance testing. Some take UAT to mean that the system or application is tested by users to confirm that documented requirements have been met. Others, such as myself, see UAT as a real-world validation of the system, performed by the users themselves.

The difference is what the test is based upon for the purpose of the analysis of results. The problem with basing UAT on requirements is that 1) many times we don't have well defined requirements on projects, and 2) even when we do have well defined requirements they can have defects in them.

At this point, many people ask, "Then, what do you base your test upon?"

Actually, there are many ways to test without defined requirements. You can refer to my article, Testing Without Defined Requirements, for a further listing of those methods.

For the purposes of UAT, a very effective way to design tests is to base them on user processes. These can be workflows or other process-driven activities that people will use the software to accomplish. Not only is this a different basis for testing, but it is a process-driven view that is typically not achievable based on requirements.

In fact, this may be the only opportunity to perform any type of business process validation. One of the most critical concerns in any system deployment is that the system will work immediately after deployment to support what the users do. Business process validation helps assure that business processes are supported by the application or system.

I also combine process-driven test design with data-driven test design to achieve a test that models the real world. To illustrate this, I use an analogy of plumbing. Without water flowing through the pipes, all you can tell is whether or not the pipes are in the right place. When water is flowing through the pipes, you can detect leaks and assess whether the pressure is adequate, the color and odor of the water, and the correctness of hot and cold. The pipes represent processes, the water represents the data that flows through the processes, and the faucets represent the controls in a system.
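The plumbing analogy can be sketched in code: drive one business process (the pipes) with varied, realistic data records (the water) and judge the outcome of the whole process, not individual functions. The order workflow and the sample records below are hypothetical illustrations, not a real application:

```python
# Minimal sketch of process-driven plus data-driven UAT design.
# Workflow steps and records are hypothetical; a real test would
# drive the actual application through each business step.

def run_order_workflow(record):
    """Walk one order through the process: enter -> price -> ship check."""
    order = {"customer": record["customer"], "qty": record["qty"]}
    order["total"] = record["qty"] * record["unit_price"]         # pricing step
    order["shippable"] = order["qty"] > 0 and order["total"] > 0  # shipping check
    return order

# The "water in the pipes": varied, realistic data records.
test_data = [
    {"customer": "ACME", "qty": 5, "unit_price": 9.99},
    {"customer": "Globex", "qty": 0, "unit_price": 24.50},  # boundary case
]

for record in test_data:
    result = run_order_workflow(record)
    # Validate the outcome of the whole process for each record.
    assert result["shippable"] == (record["qty"] > 0)
    print(record["customer"], "->", "shippable" if result["shippable"] else "held")
```

The design point is that each data record exercises the entire process end to end; a zero-quantity order is the kind of "leak" that only shows up when data is actually flowing.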

I'm a big believer in requirements-based testing. I just believe that the users need a second, independent perspective of testing to compensate for the errors and gaps that are typically seen in requirements -- even when they have been reviewed.

In any event, UAT should be based on a defined set of acceptance criteria that are defined at the project inception.

Unfortunately, UAT is often performed at the worst possible time in a project -- at the very end. If you wait until this point to find where the system fails to meet user needs, there is a huge risk that the project will be late or not accepted at all. To mitigate this risk, I advise involving users in reviews and testing that precede UAT.

In my experience, there really isn't much value in a surprise factor of testing. In other words, it's fine for users to see what's coming their way long before they have to test it.

Manual vs. Automated UAT

Traditional UAT is also performed manually. I prefer this approach because 1) UAT is typically performed just once, which means there is little return on investment for test automation, and 2) users need to see what the software actually looks like and how it performs. Automation takes this perspective away from people.

There is perhaps a role for automation in UAT. There may be mundane testing that can be easily automated. However, the user needs to first understand how the application is performing those functions. Also, users may be called upon to perform some level of performance and regression testing, but it is the rare instance that this can be performed without assistance from people who know how to do this type of testing. So I consider these to be the exceptions instead of the rule.

There is also the possibility of automating repeated UAT. This is sometimes seen when multiple releases are delivered. A user may need to test new functions in addition to all the functions previously tested in a prior release, such as in a regression test. This is typically seen in the agile world as well as any type of iterative development approach.
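One way to picture this kind of repeated UAT is as a regression suite that grows with each release: every release re-runs the checks users already accepted, plus its new ones. The feature checks below are trivial placeholders, assumed for illustration; in practice each would drive the application through a previously accepted user process:

```python
# Sketch of a growing UAT regression suite across releases.
# The checks are placeholders; real ones would exercise accepted
# user processes against the application.

def check_login():     # accepted in release 1, automated thereafter
    return True

def check_search():    # accepted in release 2
    return True

def check_checkout():  # new in release 3 -- exercised manually first
    return True

# Each release re-runs everything accepted so far, plus its new checks.
regression_suite = {
    "release 1": [check_login],
    "release 2": [check_login, check_search],
    "release 3": [check_login, check_search, check_checkout],
}

for release, checks in regression_suite.items():
    status = "pass" if all(check() for check in checks) else "FAIL"
    print(release, "->", status)
```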

So, there can be an opportunity for automating some acceptance tests. The main concern is that users get to experience first-hand how the system supports their needs.


Regardless of which perspective of UAT is applied on your projects, keep in mind that having actual users test the software in their own world, using their own processes, is not only helpful but necessary.

Regardless of your methodology or development approach, there is a critical need to both verify that the system meets requirements and to validate that the system meets user needs. Always keep in mind that defined requirements may not reflect actual user needs. Therefore, you need both verification and validation.

I hope this brief look at some major forms of UAT helps you define your own view of UAT in a way that will add value to your testing efforts.

Last Updated on Tuesday, 18 June 2013 20:16



