The validation process consists of five steps: analyze the job, choose the tests, administer the tests, relate the test scores and the criteria, and cross-validate and revalidate.
Analyze the Job: The first step is to analyze the job and write job descriptions and job specifications. Here, you need to specify the human traits and skills you believe are required for adequate job performance. For example, must an applicant be verbal, a good talker? Is programming required? Must the person assemble small, detailed components? These requirements become the predictors. These are the human traits and skills you believe predict success on the job. In this first step, you also must define what you mean by "success on the job," since it's this success for which you want predictors. The standards of success are criteria. You could focus on production-related criteria (quantity, quality, and so on), personnel data (absenteeism, length of service, and so on), or judgments of worker performance (by persons such as supervisors). For an assembler's job, your predictors might include manual dexterity and patience. Criteria that you would hope to predict with your test might include quantity produced per hour and number of rejects produced per hour.
Choose the tests: Next, choose tests that you think measure the attributes (predictors) important for job success. Employers usually base this choice on experience, previous research, and "best guesses." They usually don't start with just one test. Instead, they choose several tests and combine them into a test battery. The test battery aims to measure an array of possible predictors, such as aggressiveness, extroversion, and numerical ability.
Some companies publish employment tests that are generally available to anyone. For example, Wonderlic Personnel Test, Inc., publishes a well-known intellectual capacity test, as well as other tests, including technical skills tests, aptitude test batteries, interest inventories, and reliability inventories. G. Neil Company of Sunrise, Florida, offers employment testing materials including, for example, a clerical skills test, telemarketing ability test, service ability test, management ability test, team skills test, and sales abilities test. Again, though, don't let the widespread availability of personnel tests blind you to this important fact: you should use the tests in a manner consistent with employment laws, and in a manner that is ethical and protects the test taker's privacy.
Administer the test: Next, administer the selected test(s) to employees. You have two choices here. One option is to administer the tests to employees presently on the job. You then compare their test scores with their current performance; this is concurrent validation. Its main advantage is that data on performance are readily available. The disadvantage is that current employees may not be representative of new applicants (who, of course, are really the ones for whom you are interested in developing a screening test). Current employees have already had on-the-job training and have been screened by existing selection techniques.
Predictive validation is the second and more dependable way to validate a test. Here you administer the test to applicants before they are hired. Then hire these applicants using only existing selection techniques, not the results of the new tests you are developing. After they have been on the job for some time, measure their performance and compare it to their earlier test scores. You can then determine whether you could have used their performance on the test to predict their subsequent job performance. In the case of an assembler's job, the ideal situation would be to administer, say, the Test of Mechanical Comprehension to all applicants. Then ignore the test results and hire assemblers as you usually do. Perhaps six months later, measure your new assemblers' performance (quantity produced per hour, number of rejects per hour) and compare this performance to their Mechanical Comprehension test scores.
Relate Your Test Scores and Criteria: The next step is to determine whether there is a significant relationship between scores (the predictor) and performance (the criterion). The usual way to do this is correlation analysis, which shows the degree of statistical relationship between (1) scores on the test and (2) job performance.
If there's a correlation between test and job performance, you can develop an expectancy chart. This presents the relationship between test scores and job performance graphically. To do this, split the employees into, say, five groups according to test scores: the highest-scoring fifth, the second-highest fifth, and so on. Then compute the percentage of high job performers in each of these five test score groups and present the data in an expectancy chart. This shows the likelihood that employees who score in each of these five test score groups will be high performers. In one such chart, for example, someone scoring in the top fifth of the test has a 97% chance of being rated a high performer, while one scoring in the lowest fifth has only a 29% chance of being rated a high performer.
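The grouping behind an expectancy chart can be sketched in a few lines: sort employees by test score, split them into fifths, and compute the share of high performers in each fifth. The employee data below are hypothetical sample values, not from the text:

```python
# Building the data behind an expectancy chart.
# Each employee is a (test_score, is_high_performer) pair; all values
# are made-up sample data for illustration.
employees = [
    (55, False), (59, False), (62, False), (68, True), (73, False),
    (75, True), (77, True), (81, True), (84, True), (90, True),
]

by_score = sorted(employees)       # lowest test scores first
per_group = len(by_score) // 5     # employees in each fifth

chart = []                          # percentage of high performers per fifth
for i in range(5):
    group = by_score[i * per_group:(i + 1) * per_group]
    pct_high = 100 * sum(high for _, high in group) / len(group)
    chart.append(pct_high)
    print(f"Fifth {i + 1} (scores {group[0][0]}-{group[-1][0]}): "
          f"{pct_high:.0f}% high performers")
```

Plotting `chart` as a bar chart, from lowest-scoring fifth to highest, gives the expectancy chart itself; with a valid test, the bars rise from left to right.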
Cross Validate and Revalidate: Before putting the test into use, you may want to check it by cross-validating, that is, by repeating the previously described steps on a new sample of employees. At a minimum, an expert should revalidate the test periodically.
The procedure used to demonstrate content validity differs from that used to demonstrate criterion validity. Content validity tends to emphasize judgment. First, do a careful job analysis to identify the work behaviors required. Then combine several samples of those behaviors into a test. A typing and computer skills test for a clerk would be an example. The fact that the test is a comprehensive sample of actual, observable, on-the-job behaviors is what lends the test its content validity.