Some scientists (particularly those working in the biological sciences) talk of “positive controls” (other scientists may call these a “reference” or a “standard”) and “negative controls”. The terms can seem puzzling at first, but once you understand what they mean they’re quite easy.
Examples from everyday life.
Positive controls. Have you ever bought a new car? Did you have a test drive first to get an idea of how the car performs? The test drive tells you the standard that you can expect. When you get your new car, it might not be the actual car you took on a test drive, but it should be the same model and so perform similarly. Now suppose you take delivery of your new car, and it doesn’t match up to the car you took on a test drive. Maybe it doesn’t accelerate as well, or some accessories are missing. You could reasonably go back to the showroom, point out the deficiencies and have your new car repaired or replaced, or maybe even ask for your money back.
The test drive was your “positive control” – it set the standard, it showed you what should happen. If you hadn’t taken the test drive, you might not have realised that your new car was defective. That’s why positive controls are so useful – they tell you what to expect if things go well.
Negative controls. A negative control is the opposite of a positive control. It tells you what should happen if your experimental intervention does nothing. Suppose you have heard that adding grated beetroot to chocolate cake mix makes it taste even better. So you head to the kitchen and bake a chocolate cake with beetroot in it and it tastes great! But, wait! How do you know it’s any better than your normal chocolate cake? The only way to test this is to bake a chocolate cake using your normal recipe – instead of adding beetroot you just use the regular ingredients. This is your “negative control” – it sets the standard if you do nothing to alter the recipe. So now you can compare the beetroot-enhanced cake with the normal one and see whether there really is a difference.
For scientists, positive controls are very helpful because they allow us to be sure that our experimental set-up is working properly. For example, suppose we want to test how well a new drug works and we have designed a laboratory test to do this. We test the drug and it works, but has it worked as well as it should? The only way to be sure is to compare it to another drug (the positive control) which we know works well. The positive control drug is also useful because it tells us our experimental equipment is working properly. If the new drug doesn’t work, we can rule out a problem with our equipment by showing that the positive control drug works.
The “negative control” sets what we sometimes call the “baseline”. Suppose we are testing a new drug to kill bacteria (an antibiotic) and to do this we are going to count the number of bacteria that are still alive in a test tube after we add the drug. We could set up an experiment with three tubes.
- One tube could contain the drug we want to test.
- The second tube would contain our positive control (a different drug which we know will kill the bacteria).
- The last tube is our negative control – it contains a drug which we know has no effect on the bacteria. This tells us how many bacteria would be alive if we didn’t kill any of them.
If the new drug is working, there should be fewer bacteria left alive in the first tube than in the last tube, and ideally the number still alive (if any) should be the same in the first and second tubes.
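The comparison described above can be sketched as a small piece of code. This is purely illustrative: the function name `interpret_counts` and the example counts are made up, not part of any real laboratory protocol.

```python
# A minimal sketch of the three-tube comparison, assuming we simply
# compare counts of surviving bacteria in each tube.

def interpret_counts(test, positive, negative):
    """Interpret surviving-bacteria counts from the three tubes.

    test     - count in the tube with the new drug
    positive - count in the tube with the known-effective drug
    negative - count in the tube with the inert drug (the baseline)
    """
    # If the positive control didn't kill bacteria, something is wrong
    # with the experimental set-up, so no conclusion can be drawn.
    if positive >= negative:
        return "positive control failed: check the experimental set-up"
    # Fewer survivors than the baseline means the new drug had an effect.
    if test < negative:
        return "new drug kills bacteria"
    return "new drug shows no effect"

# Example: the baseline tube has many survivors, the positive control
# has very few, and the new drug's tube looks like the positive control.
print(interpret_counts(test=12, positive=10, negative=1_000_000))
```

Note how the negative control does double duty: it sets the baseline for judging the new drug, and it is also the yardstick the positive control must beat before we trust the set-up at all.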
So “controls” are important to scientists because they help us validate the performance of our experimental set-up and tell us what effects we can reasonably expect to observe.