Admissions assessment is usually associated with the process of examining or testing candidates before they are admitted to a course or program. Entire fields of study evaluate and analyze how students are tested and whether improvements can be made to this fundamental aspect of formal instruction. However, the principles that underlie admissions assessment are just as important when applied to the very process of selecting those to be admitted to the course, program, or school. Competitive programs universally attract far more applicants than there are places available, and because most entrants graduate once selected, admission is effectively the decisive hurdle. Competition for entry is therefore intense, even before accounting for the already high initial barrier to application. A candidate contemplating an application to a competitive program often needs the appropriate undergraduate degree to be eligible or competitive. This prerequisite must be completed with a competitive Grade Point Average and is usually accompanied by some form of entrance exam (MCAT, LSAT, GMAT, GRE, etc.). To be considered, or to have a reasonable chance of acceptance, the applicant also needs additional experience such as volunteering, shadowing, and research, along with favorable references from each. The stakes for the individual applicant and the institution are high. The selection process therefore must be fair, credible, valid, and publicly defensible.

>> Grab Our Free Ultimate Guide to Admissions Screening <<

Here we will discuss the current state of the FIVE common admissions assessment measures that research has shown to be ineffective:

Admissions assessment fail #1: GPA and Admissions Standardized Tests


Aptitude and achievement tests are almost ubiquitous in the selection process for professional schools. The most well-known and therefore best-studied test is the North American Medical College Admission Test (MCAT). The academic literature shows some positive predictive value for this test, specifically for in-course performance in the first year. However, studies such as those by Koenig et al. and Tekian, while reporting on the positive predictive validity of the test, conclude that the MCAT is not a perfect predictor and that variables such as ‘diligence’, ‘motivation’, and ‘communication skills’ need further investigation. In a study of two Scottish medical schools, Lynch et al. found that the UKCAT (the UK equivalent of the MCAT) did not predict performance in the first year of medical school. While medical school entrance exams have been studied more extensively, yielding mixed results, there are simply not enough studies yet to draw conclusions about the other tests or about the relative merits of so-called achievement versus aptitude tests.

An inherent flaw in most of these assessments is that their predictive value is judged by the relationship between the selection test and in-course assessment. Essentially, this is an examination of “tests predicting tests”. At the level of the individual, this may tell us whether a student is an adept test taker, but it yields no insight into how this person will perform as a future physician, lawyer, dentist, or researcher. Determining whether a selection test can predict an applicant’s future practice as a professional requires more sophisticated methods of measurement and choice of outcome variables. Importantly, administering such tests is itself costly, adding further financial barriers for applicants.

The use of GPA as a screening tool also comes with caveats. There are surprisingly few studies on the utility of GPA as a predictor of future success. In a study of one school in the Netherlands, higher GPAs were associated with faster graduation, greater success in obtaining preferred specialist training, and greater scientific output (Cohen-Schotanus et al.). However, in a study of UCAS tariff scores in the United Kingdom, Powis et al. found that higher scores were associated with being younger and male, and were related to ethnic origin and type of school. Much like the advantage one can buy with professional help in preparing for an entrance exam, the above study points to an uncomfortable reality: having financial means is a significant advantage at every step of the education process. This can affect school choices from an early age, such as private vs. public schools, highly rated districts, specialized/magnet schools, after-school clubs and activities, and eventually preparation for the entrance exam. Importantly, more affluent applicants have inherently more time to dedicate to studying for both their admissions test and their courses because they are not required to take on part-time employment to pay the bills.

While these types of achievement examinations may have some predictive value, that predictive value is limited to performance on other tests. They carry barriers to entry and costs of preparation, and are thus inherently biased. Importantly, studies have shown that GPA and aptitude scores beyond a certain point provide no additional advantage, and selecting those with the highest scores further interferes with the ability of admissions committees to select the best-suited applicants.

Ultimately, it is critical to keep in mind that the use of GPA and standardized testing is an outdated practice introduced during the industrial revolution. Its roots can be traced back to the 1800s, starting with the work of Adolphe Quetelet, continued by Sir Francis Galton and Frederick Winslow Taylor, and solidified by Edward Thorndike. Schools at the time were primarily concerned with training factory workers and introduced school bells to mimic factory bells and condition children for their future jobs. Thorndike’s work gave rise to the introduction of GPA and standardized testing for sorting students, which he believed could predict future success, but which we now know is not necessarily the case. Their use is quickly falling out of favor, leading Harvard professor Todd Rose to coin the term “averagarians”, which he defines as anyone “who uses averages to understand individuals”. For a more detailed explanation, we highly recommend his book, “The End of Average”.

Admissions assessment fail #2: The personal statement


The personal statement, or any form of short essay, is another common selection tool for many schools and programs. However, few if any studies demonstrate its effectiveness as a selection tool. In a recent assessment of selection for the health care professions, the authors found no evidence that personal statements are reliable or have any predictive validity (Prideaux et al.). In another review, of personal qualities in selection, Albanese et al. found no evidence that the personal statement measured anything different from the interview. Wouters et al. found that one cannot distinguish between selected and non-selected applicants on the basis of written statements of motivation. The unreliability of personal or autobiographical statements is not difficult to explain. Applicants have almost unlimited time in which to craft their statement. It is extremely difficult to verify the veracity of the personal history recounted within. There are countless guides, workshops, and classes on how to compose the ‘ideal’ statement, resulting in a homogeneity of submissions. Personal history and background information also betray the anonymity of the process (if anonymity is indeed imposed), leaving the applicant vulnerable to the implicit biases of the audience. Personal and autobiographical statements thus serve as little more than perfunctory tasks and may even harm the selection process. Combine this with the time and resources required to review thousands of personal statements, and you have a completely failed system of applicant selection.

Admissions assessment fail #3: Computer-based Situational Judgment Tests

The use of web-based situational judgement tests (SJTs) is an attempt at a more scientifically rational approach to candidate selection. A handful of private companies offer such services in North America, Europe, and Australia, presenting a truly outdated technology, used by businesses for decades, as a novel “solution”. For example, in one such test, candidates are shown a hypothetical real-life situation in a video or written prompt and asked how they would respond. The test is claimed to examine non-cognitive abilities like problem solving, decision-making, and interpersonal skills. It uses multiple raters along with developed scenarios and rating scales. In essence, the reviewers judge whether an applicant responds “appropriately” in a given situation. While these tests purport to be more reliable as a selection tool, they too have significant shortcomings. Asking hypothetical questions generally leads applicants to provide socially acceptable responses, and as a result those of higher socioeconomic status normally do better on such tests. Judging whether a response to a usually delicate or stressful imagined scenario is appropriate can vary across cultures, yet these tests are singularly guided by accepted Western cultural norms. This can pose significant challenges to non-native applicants or those immersed in another culture. Given the diverse cultural makeup of the United States, the United Kingdom, and Canada, for example, this is not an insignificant issue for applicants. Indeed, at a recent medical education conference in Canada, the New York Medical College (NYMC) reported that underrepresented minority applicants scored lower on such a computer-based situational judgment test compared to other applicants, and that males scored lower than females, creating a gender bias. Related to this is a caveat present in all tests that claim to assess non-cognitive skills: the applicant provides the answer they think the reviewers want to hear. And because these tests claim to be less vulnerable to the subjective opinions of an interviewer, an applicant can be coached on the “correct” answer all the more efficiently. There is a commonly held opinion that applicants from more advantaged socioeconomic backgrounds may have stronger cognitive and non-cognitive skills than those from lower socioeconomic strata because of their economically privileged childhood experiences. While this is not always the case, some believe that such tests may discriminate against socio-economically disadvantaged applicants who lacked the opportunity to develop or refine their non-cognitive skills as a result of their lesser social, economic, and cultural capital.

Moreover, such tests raise yet another barrier for lower-income applicants, adding another fee to an already costly application process and, for those selected, tuition.

Admissions assessment fail #4: Interviews


The interview, face-to-face contact with a single interviewer or a panel, with varying degrees of structure, is a common part of selection processes. The interview step of most applications follows an initial narrowing of the applicant pool using grades, aptitude scores, personal statements, and/or SJTs. This means that not all students will have an opportunity to meet with a reviewer in person. Here the errors made with achievement tests are amplified. Interviews are costly and logistically difficult operations for an institution, requiring the management of applicants’ and interviewers’ time and the use of appropriate locations, to list just a few demands. These investments might be justified if the interview were a reliable method for selecting applicants. Unfortunately, studies that analyze the effectiveness of the interview show that it is not a robust selection measure (Kreiter et al.).

The reason behind this lack of reliability is seemingly obvious: the interview is only as reliable as the person conducting it. A study by Quintero et al. illustrates this nicely. In a study of 135 interviewers, some interviewers gave candidates more favorable rankings when their personality preferences, as measured by the Myers–Briggs scale, matched those of the candidate. Unsurprisingly, interviewers are subconsciously drawn to the applicants most like themselves. For better or worse, relying on this selection criterion will ensure that the future classes of professional schools resemble those of the past.

More recent interview approaches (such as a series of multiple mini-interviews) try to remedy this issue with the use of multiple raters or reviewers. While their creators claim good predictive validity and reliability (Eva et al.; Lemay), their administration is even more costly than standard interviews. They require more reviewers, standardized parameters, and careful coordination. Developers must also assume that all items included in any particular test version will be fully exposed. Hence, to preserve test validity, reliability, and fairness, test versions must be rethought and carefully preplanned, along with decisions about what information to divulge to candidates. This requires considerable effort from administrators. Furthermore, given their complex and costly nature, it is not feasible to administer mini-interviews to the entire pool of applicants, resulting in further bias and unfairness and, importantly, in missing well-suited applicants who were arbitrarily filtered out at the initial stages of the selection process using GPA, the personal statement, an SJT, and/or an admissions test. Finally, these types of interviews have been found to be biased against male applicants, and underrepresented minority groups score lower on them, just as with computer-based situational judgement tests.

Admissions assessment fail #5: References/Letters of recommendation


Like the personal statement, this measure is both common and ineffective. There is no empirical evidence to support the reliability of letters of reference in the selection process. The applicant can unfairly affect the process simply by selecting their (usually three) best references. If the applicant has both good and bad work experiences, asking for such a small sample will not give an adequate assessment of that applicant. Another issue with letters of reference is that of opportunity and accessibility, both of which can depend on one’s socioeconomic status. Access to influential and established professionals within a field, and a reference letter from them, can put an applicant at a significant advantage. However, this advantage comes from the strength of the referee rather than the strengths of the applicant. It can also introduce professional politics into the selection process, and these again reflect the referee and their professional standing rather than saying anything about the applicant.

In conclusion, there is some predictive validity to achievement tests, but only as far as predicting future test-taking. Interviews and letters of reference have no demonstrated validity as selection tools. Newer formats like the multiple mini-interview are costly to operate and logistically complex, and the number of applicants who can take them is limited. SJTs have been shown to be biased against applicants of certain cultural and/or socioeconomic backgrounds, and they add to the overall cost, complexity, and barriers for students by introducing additional fees on top of the already high cost of the application process. If we are trying to increase diversity and accessibility in our schools and equality in our society, we must use a more democratic and innovative alternative.

So what can you do instead? 

>> Grab Our Free Ultimate Guide to Admissions Screening <<

To your success,

Your friends at SortSmart

SortSmart® Candidate Selection

References:

  • Albanese, M.A., Snow, M.H., Skochelak, S.E., Huggett, K.N., and Farrell, P.M. (2003). Assessing personal qualities in medical school admissions. Acad Med 78, 313–321.
  • Cohen-Schotanus, J., et al. (2006). The predictive validity of grade point average scores in a partial lottery medical school admission system. Medical Education 40, 1012–1019.
  • Donnon, T., Paolucci, E.O., and Violato, C. (2007). The predictive validity of the MCAT for medical school performance and medical board licensing examinations: a meta-analysis of the published research. Acad Med 82, 100–106.
  • Eva, K.W., Rosenfeld, J., Reiter, H.I., and Norman, G.R. (2004a). An admissions OSCE: the multiple mini-interview. Medical Education 38, 314–326.
  • Eva, K.W., Reiter, H.I., Rosenfeld, J., and Norman, G.R. (2004b). The relationship between interviewers’ characteristics and ratings assigned during a multiple mini-interview. Acad Med 79, 602–609.
  • Eva, K.W., Reiter, H.I., Trinh, K., Wasi, P., Rosenfeld, J., and Norman, G.R. (2009). Predictive validity of the multiple mini-interview for selecting medical trainees. Med Educ 43, 767–775.
  • Koenig, J.A., Sireci, S.G., and Wiley, A. (1998). Evaluating the Predictive Validity of MCAT Scores across Diverse Applicant Groups. Academic Medicine 73, 1095.
  • Kreiter, C.D., Yin, P., Solow, C., and Brennan, R.L. (2004). Investigating the reliability of the medical school admissions interview. Adv Health Sci Educ Theory Pract 9, 147–159.
  • Lemay, J.-F., Lockyer, J.M., Collin, V.T., and Brownell, A.K.W. (2007). Assessment of non-cognitive traits through the admissions multiple mini-interview. Med Educ 41, 573–579.
  • Powis, D., James, D., and Ferguson, E. (2007). Demographic and socio-economic associations with academic attainment (UCAS tariff scores) in applicants to medical school. Medical Education 41, 242–249.
  • Prideaux, D., Roberts, C., Eva, K., Centeno, A., Mccrorie, P., Mcmanus, C., Patterson, F., Powis, D., Tekian, A., and Wilkinson, D. (2011). Assessment for selection for the health care professions and specialty training: Consensus statement and recommendations from the Ottawa 2010 Conference. Medical Teacher 33, 215–223.
  • Quintero, A.J., Segal, L.S., King, T.S., and Black, K.P. (2009). The Personal Interview: Assessing the Potential for Personality Similarity to Bias the Selection of Orthopaedic Residents. Academic Medicine 84, 1364–1372.
  • Tekian, A. (1998). Cognitive factors, attrition rates, and underrepresented minority students: the problem of predicting future performance. Academic Medicine 73, S38–S40.
  • Wouters, A., Bakker, A.H., van Wijk, I.J., Croiset, G., and Kusurkar, R.A. (2014). A qualitative analysis of statements on motivation of applicants for medical school. BMC Medical Education 14, 200.