“The generation of random numbers is too important to be left to chance.” —Robert R. Coveyou
The randomized controlled trial (RCT) has become the standard by which studies of therapy are judged. The key to the RCT lies in the random allocation process. When done correctly in a large enough sample, random allocation is an effective measure in reducing bias. In this article we describe the random allocation process.
What makes up the random allocation process?
The random allocation process consists of two steps:
generating an unpredictable random sequence,
implementing the sequence in a way that conceals the treatments until patients have been formally assigned to their groups.
What are acceptable ways of generating a random sequence?
Simple random allocation is the easiest and most basic approach that provides unpredictability of treatment assignment. In simple random allocation, treatment assignment is made by chance without regard to prior allocation (that is, it bears no relation to past allocations and it is not discoverable ahead of time).
Good methods of generating a random allocation sequence include using a random-numbers table or a computer software program that generates the random sequence. There are manual methods of achieving random allocation such as tossing a coin, drawing lots or throwing dice. However, these manual methods in practice often become nonrandom, are difficult to implement and do not leave an audit trail. Therefore, they are not generally recommended. Procedures to avoid completely include using hospital chart numbers, alternating patients sequentially or assigning by date of birth.
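As an illustration of computer-generated simple random allocation (a minimal sketch, not the procedure of any particular trial; the function name and the A/B labels are invented here), each assignment is made by chance with no regard to earlier assignments:

```python
import random

def simple_random_allocation(n_patients, treatments=("A", "B"), seed=None):
    """Assign each patient to a treatment purely by chance,
    with no regard to any prior allocation."""
    rng = random.Random(seed)  # a fixed seed gives a reproducible audit trail
    return [rng.choice(treatments) for _ in range(n_patients)]

sequence = simple_random_allocation(20, seed=42)
print(sequence)
print("A:", sequence.count("A"), "B:", sequence.count("B"))
```

Because each assignment is independent of the ones before it, the group sizes themselves are left to chance, which is exactly the imbalance issue the article turns to next.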
Because simple random allocation has no relationship with prior assignment, unequal group sizes can happen by chance, especially in small sample sizes. To illustrate this point, 20 different random allocation sequences were generated for two treatments that had a total sample size of 20 patients. Here are the results of the number of patients randomly assigned to each of two treatment groups (A or B) (Table 1):
As you can see, an imbalance of six patients or more between groups occurred seven times (35% of the time). In trial number five, the difference was twelve! However, this concern about group imbalance diminishes as the sample size grows. In general, for a two-arm trial the probability of a significant imbalance is negligible with a sample size of 200 or more [1]. Alternatively, procedures other than simple random allocation can be used to ensure balanced group sizes, such as blocking, the random allocation rule, and replacement randomization [1].
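One of the balancing procedures mentioned above, blocking, can be sketched in a few lines. The following is an illustrative Python sketch of permuted-block randomization (the block size of 4 and the A/B labels are assumptions, not taken from the article); each block contains equal numbers of A and B assignments in random order, so the arms can never diverge by more than half a block:

```python
import random

def permuted_block_allocation(n_patients, block_size=4, seed=None):
    """Permuted-block randomization for two arms: each block holds an
    equal number of A and B assignments in random order, so at any point
    the group sizes differ by at most block_size // 2."""
    assert block_size % 2 == 0, "block size must be even for two arms"
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_patients:
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)  # randomize order within the block
        sequence.extend(block)
    return sequence[:n_patients]

seq = permuted_block_allocation(20, block_size=4, seed=1)
print("A:", seq.count("A"), "B:", seq.count("B"))  # always 10 and 10
```

With 20 patients and a block size of 4, every complete block restores exact balance, so the 10-patient imbalances seen under simple random allocation cannot occur.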
What is allocation concealment?
Allocation concealment is the technique of ensuring that implementation of the random allocation sequence occurs without knowledge of which patient will receive which treatment, as knowledge of the next assignment could influence whether a patient is included or excluded based on perceived prognosis.
For example, suppose that a spine surgeon has been working on a new kind of bone substitute that has shown great promise in a series of patients. The surgeon believes the new substitute is better than the current method and wants to demonstrate this advantage in a randomized controlled trial. Let’s also assume the random sequence has been generated, the new bone substitute is the next treatment to be given, and the surgeon knows that this treatment is next on the list. The next patient seen by the surgeon meets the inclusion/exclusion criteria for the study, but has comorbidities that make the surgeon doubt the patient will do well with any treatment. In this scenario one might easily, even subconsciously, justify not enrolling the patient. Perhaps the patient hesitates briefly when the study is mentioned, and the surgeon suggests that the patient sleep on the idea of participating. Maybe the surgeon decides to order more tests before offering enrollment. The subtle ways to exclude this patient are limited only by one’s imagination.
What is the result when concealment is not ensured?
One can expect a biased estimate of the treatment effect; in some cases the exaggeration is 40% or larger [2].
What are acceptable ways to ensure concealment?
The following are considered adequate approaches to concealed allocation:
Central randomization. In this technique the individual recruiting the patient contacts a central methods center by phone or secure computer, and the allocation is released only after the patient has been enrolled.
Sequentially numbered, opaque, sealed envelopes. This method is generally considered acceptable but may be susceptible to manipulation [3]. If investigators use envelopes, the envelopes should be numbered in advance and opened sequentially, only after the participant’s name has been written on the appropriate envelope. In addition, pressure-sensitive paper inside the envelope can transfer that information onto the assignment card, creating a valuable audit trail [4].
Did the random allocation work?
Researchers should always present the distributions of baseline characteristics by treatment group in a table (often the first table). This allows the reader to compare the groups at baseline on the distribution of important prognostic characteristics and allows surgeons to infer results to specific populations [5]. The reader should look at the magnitude of any differences between groups to judge whether those differences should be accounted for in the analysis.
The use of P-values to determine whether differences in baseline characteristics are important is not appropriate in randomized trials [4,6]. Remember, the P-value is not a measure of the size of a difference; it is the probability of observing a difference at least as large as the one seen if chance alone were operating. In a trial with a properly generated and concealed random allocation sequence, any differences at baseline are, by definition, due to chance.
The key phrase in an RCT is “random allocation” and it must be done properly, using two steps:
generating the random sequence,
implementing the sequence in a way that it is concealed.
One should consider using a random numbers table or computer program to generate the random allocation sequence.
To minimize the effect of bias, the random allocation sequence should remain concealed from those enrolling patients into the study.
1. Lachin JM. Statistical properties of randomization in clinical trials. Control Clin Trials. 1988;9(4):289–311.
2. Moher D, Pham B, Jones A, et al. Does quality of reports of randomised trials affect estimates of intervention efficacy reported in meta-analyses? Lancet. 1998;352(9128):609–613.
3. Bhandari M, Guyatt GH, Swiontkowski MF. User’s guide to the orthopaedic literature: how to use an article about a surgical therapy. J Bone Joint Surg Am. 2001;83-A(6):916–926.
4. Schulz KF, Grimes DA. Allocation concealment in randomised trials: defending against deciphering. Lancet. 2002;359(9306):614–618.
5. Altman DG, Schulz KF, Moher D. The revised CONSORT statement for reporting randomized trials: explanation and elaboration. Ann Intern Med. 2001;134(8):663–694.
6. Altman DG, Doré CJ. Randomisation and baseline comparisons in clinical trials. Lancet. 1990;335(8682):149–153.
Articles from Evidence-Based Spine-Care Journal are provided here courtesy of Thieme Medical Publishers
Random assignment or random placement is an experimental technique for assigning human participants or animal subjects to different groups in an experiment (e.g., a treatment group versus a control group) using randomization, such as by a chance procedure (e.g., flipping a coin) or a random number generator. This ensures that each participant or subject has an equal chance of being placed in any group. Random assignment of participants helps to ensure that any differences between and within the groups are not systematic at the outset of the experiment. Thus, any differences between groups recorded at the end of the experiment can be more confidently attributed to the experimental procedures or treatment.
Random assignment, blinding, and controlling are key aspects of the design of experiments, because they help ensure that the results are not spurious or deceptive via confounding. This is why randomized controlled trials are vital in clinical research, especially ones that can be double-blinded and placebo-controlled.
Mathematically, there are distinctions between randomization, pseudorandomization, and quasirandomization, as well as between random number generators and pseudorandom number generators. How much these differences matter in experiments (such as clinical trials) is a matter of trial design and statistical rigor, which affect evidence grading. Studies done with pseudo- or quasirandomization are usually given nearly the same weight as those with true randomization but are viewed with a bit more caution.
Benefits of random assignment
Imagine an experiment in which the participants are not randomly assigned; perhaps the first 10 people to arrive are assigned to the experimental group, and the last 10 people to arrive are assigned to the control group. At the end of the experiment, the experimenter finds differences between the experimental group and the control group, and claims these differences are a result of the experimental procedure. However, they may also be due to some other preexisting attribute of the participants, e.g., people who arrive early versus people who arrive late.
Imagine the experimenter instead uses a coin flip to randomly assign participants. If the coin lands heads-up, the participant is assigned to the experimental group; if it lands tails-up, the participant is assigned to the control group. At the end of the experiment, the experimenter finds differences between the experimental group and the control group. Because each participant had an equal chance of being placed in either group, it is unlikely that the differences are attributable to some other preexisting attribute of the participants, e.g., those who arrived on time versus late.
Random assignment does not guarantee that the groups are matched or equivalent. The groups may still differ on some preexisting attribute due to chance. The use of random assignment cannot eliminate this possibility, but it greatly reduces it.
To express this same idea statistically: if two randomly assigned groups are compared, their sample means may differ even though both were drawn from the same pool. If a test of statistical significance is applied to the difference between the sample means, under the null hypothesis that the groups share the same population mean (i.e., the population mean of the differences is 0), the null hypothesis will occasionally be rejected, that is, deemed not plausible. In other words, the groups will sometimes be sufficiently different on the variable tested to conclude statistically that they did not come from the same population, even though, procedurally, they were assigned from the same total group. For example, random assignment may produce a group containing 20 blue-eyed people and 5 brown-eyed people. This is a rare event under random assignment, but it can happen, and when it does it may cast doubt on the causal agent in the experimental hypothesis.
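This chance-rejection phenomenon can be demonstrated with a small simulation (illustrative only; the 5% level, the group size of 30, and the normal-approximation test are assumptions): repeatedly split one homogeneous pool into two random groups and count how often a significance test declares their means different.

```python
import random
import statistics

def false_rejection_rate(n_trials=2000, group_size=30, seed=0):
    """Split one homogeneous population in two at random many times and
    count how often a two-sample test (normal approximation) would call
    the group means 'significantly' different at the 5% level."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_trials):
        # everyone is drawn from the same N(0, 1) population
        pool = [rng.gauss(0, 1) for _ in range(2 * group_size)]
        rng.shuffle(pool)  # random assignment to two groups
        g1, g2 = pool[:group_size], pool[group_size:]
        diff = statistics.mean(g1) - statistics.mean(g2)
        se = (statistics.variance(g1) / group_size
              + statistics.variance(g2) / group_size) ** 0.5
        if abs(diff) > 1.96 * se:  # two-sided 5% criterion
            rejections += 1
    return rejections / n_trials

print(false_rejection_rate())  # close to 0.05 by construction
```

Roughly 5% of the splits are flagged as "significant" even though every participant came from the same population, which is why baseline significance tests in properly randomized trials are uninformative.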
Random sampling is a related but distinct process: participants are recruited in a way that makes them representative of a larger population. Because most basic statistical tests assume an independent, randomly sampled population, random assignment is the preferred assignment method: it provides control for all attributes of the sample members, in contrast to matching on only one or a few variables, and it provides the mathematical basis for estimating the likelihood of group equivalence on characteristics of interest, both for pretreatment checks on equivalence and for the evaluation of post-treatment results using inferential statistics. More advanced statistical modeling can be used to adapt the inference to the sampling method.
Randomization was emphasized in the theory of statistical inference of Charles S. Peirce in "Illustrations of the Logic of Science" (1877–1878) and "A Theory of Probable Inference" (1883). Peirce applied randomization in the Peirce-Jastrow experiment on weight perception.
Charles S. Peirce randomly assigned volunteers to a blinded, repeated-measures design to evaluate their ability to discriminate weights. Peirce’s experiment inspired other researchers in psychology and education, fields that developed a tradition of randomized laboratory experiments and specialized textbooks in the 1800s.
Jerzy Neyman advocated randomization in survey sampling (1934) and in experiments (1923). Ronald A. Fisher advocated randomization in his book on experimental design (1935).
- http://www.socialresearchmethods.net/kb/random.php
- Charles Sanders Peirce and Joseph Jastrow (1885). "On Small Differences in Sensation". Memoirs of the National Academy of Sciences. 3: 73–83.
- Ian Hacking (September 1988). "Telepathy: Origins of Randomization in Experimental Design". Isis (A Special Issue on Artifact and Experiment). 79 (3): 427–451. doi:10.1086/354775.
- Stephen M. Stigler (November 1992). "A Historical View of Statistical Concepts in Psychology and Educational Research". American Journal of Education. 101 (1): 60–70. doi:10.1086/444032.
- Trudy Dehue (December 1997). "Deception, Efficiency, and Random Groups: Psychology and the Gradual Origination of the Random Group Design". Isis. 88 (4): 653–673. doi:10.1086/383850. PMID 9519574.
- Neyman, Jerzy (1990), Dabrowska, Dorota M.; Speed, Terence P., eds., "On the application of probability theory to agricultural experiments: Essay on principles (Section 9)", Statistical Science (translated from the 1923 Polish ed.), 5 (4): 465–472, doi:10.1214/ss/1177012031, MR 1092986.
- Caliński, Tadeusz & Kageyama, Sanpei (2000). Block designs: A Randomization approach, Volume I: Analysis. Lecture Notes in Statistics. 150. New York: Springer-Verlag. ISBN 0-387-98578-6.
- Hinkelmann, Klaus and Kempthorne, Oscar (2008). Design and Analysis of Experiments. I and II (Second ed.). Wiley. ISBN 978-0-470-38551-7.
- Charles S. Peirce, "Illustrations of the Logic of Science" (1877–1878)
- Charles S. Peirce, "A Theory of Probable Inference" (1883)
- Charles Sanders Peirce and Joseph Jastrow (1885). "On Small Differences in Sensation". Memoirs of the National Academy of Sciences. 3: 73–83. http://psychclassics.yorku.ca/Peirce/small-diffs.htm
- Hacking, Ian (September 1988). "Telepathy: Origins of Randomization in Experimental Design". Isis. 79 (3): 427–451. doi:10.1086/354775. JSTOR 234674. MR 1013489.
- Stephen M. Stigler (November 1992). "A Historical View of Statistical Concepts in Psychology and Educational Research". American Journal of Education. 101 (1): 60–70. doi:10.1086/444032.
- Trudy Dehue (December 1997). "Deception, Efficiency, and Random Groups: Psychology and the Gradual Origination of the Random Group Design". Isis. 88 (4): 653–673. doi:10.1086/383850. PMID 9519574.
- Basic Psychology by Gleitman, Fridlund, and Reisberg.
- Shaver (1993). "What statistical testing is, and what it is not". Journal of Experimental Education, 61, 293–316.