Selection decisions come in two basic varieties:
- Simple and complex.
- Simple selection decisions involve one position with several applicants. Applicants are assessed on the KSAs important to job success. The applicant who most closely approximates the requirements of the job is selected.
- Complex selection decisions are those involving several applicants and several positions. Here the decision is not only whom to select, but which candidate is best for each job.
Selection information processing demands
= Number of applicants × Amount of selection data collected
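The formula above reduces to a quick calculation. A minimal sketch, with purely illustrative figures (50 applicants, each assessed on 4 predictors):

```python
def processing_demands(num_applicants: int, data_points_per_applicant: int) -> int:
    """Total pieces of selection information the decision maker must process."""
    return num_applicants * data_points_per_applicant

# Hypothetical example: 50 applicants, each with an application blank,
# a test score, a reference check, and an interview rating.
print(processing_demands(50, 4))  # 200 pieces of information
```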
There are two types of selection decision errors:
- False positives (erroneous acceptances)
- False negatives (erroneous rejections)
- False positive errors occur when applicants successfully pass through all of the selection phases and are employed, but prove to be unsuccessful on the job. These errors are costly and sometimes disastrous, depending on the nature of the job. It may be hard to fire false positives once they are hired, adding to organizational costs. Employers use probationary periods to try to reduce the long-term consequences.
- False negative errors occur when applicants who would have been successful on the job are rejected. These applicants might have been rejected at any phase of the selection process. Although these errors are almost impossible to detect as compared to false positive errors, they can be equally damaging to the organization; for example, a rejected applicant may go on to create successful competing products or may file a lawsuit.
Decision makers want to identify true positives and true negatives. To do this, it is important to begin by using standardized selection procedures that have the highest validity for our selection situation. Then we employ proven decision-making strategies when using scores obtained from valid selection procedures as a basis for hiring job applicants.
- Answering three questions related to these points will enhance managers' employment decision making:
- 1. For a specific selection situation, what are the best methods for collecting predictor information on job applicants?
- 2. Because we often collect applicant information using more than one predictor (e.g. a test and a structured interview), how should we combine scores on the predictors to give applicants an overall or total score for selection decision-making purposes?
- 3. Once a total score on two or more predictors is obtained, how should this overall score be used to make selection decisions?
- Decision makers cannot avoid selection errors entirely, but they can take precautions to minimize them. Addressing the three questions raised here helps a decision maker take appropriate precautions. Systematic decision-making strategies can improve the chances of making correct decisions; intuition, gut instincts, premonitions, or other such subjective decision-making procedures are not successful.
Methods for collecting and combining predictor information
In personnel selection, we can collect information on job applicants using several different methods, and we can combine the collected information using a number of procedures. There are eight methods, based on whether predictor information is collected mechanically or judgmentally and whether the collected information is combined for selection decision making mechanically or judgmentally. Some methods are more advisable than others.
- Two modes of collecting predictor information from job applicants - mechanical and judgmental data collecting - are used by selection decision makers.
- Information collected mechanically refers to applicant data collected without the use of human judgment by selection decision makers. Objective data (such as scores from a mental ability test) fall into this category.
- Predictor data collected judgmentally involve the use of human judgment by selection decision makers. For example, the unstructured employment interview is a subjective means of collecting predictor data judgmentally: questions may vary from one applicant to the next, interviewers think of questions at the time of the interview, and there is no formal means of scoring interviewees' answers.
Predictor information can be combined mechanically by entering applicants' test and interview scores into a statistical equation developed to predict job performance. When applicants' predictor data are added together using human intuition or "gut instincts," the data have been combined judgmentally, e.g. looking at all results and then forming a subjective, overall impression of the applicant.
- The eight methods for collecting and combining predictor information are:
- Pure Judgment
- Trait ratings
- Profile interpretation
- Pure statistical
- Judgmental composite
- Mechanical composite
- Judgmental synthesis
- Mechanical synthesis
Pure Judgment is a method in which judgmental predictor data are collected and combined subjectively for selection decision making. No objective data (e.g. tests) are collected. The decision maker forms an overall impression of the predicted success of the applicant. The overall judgment may be based on some traits or standards in the mind of the decision maker, but these standards are usually not objective. The decision maker's role is to both collect information and make a decision about the applicant. Gut-feeling decisions are made regarding the combined information.
Trait ratings are a method in which judgmental data are collected and then combined mechanically. The decision maker rates applicants based on interviews, application blanks, and so on. Ratings are then entered into a formula and added together, and an overall score is computed for each applicant. The decision maker's role is to collect information, but the decision is based on the results of the formula calculations. The highest-scoring applicant receives the job offer.
Profile interpretation is a method in which objective data are collected mechanically but combined judgmentally. The decision maker reviews all the objectively collected data (from tests and other objective measures) and then makes an overall judgment of the applicant's suitability for the job. For example, a manager of a telemarketing firm selects telephone operators based on scores on three tests. The test data are combined judgmentally by the manager, who looks over the three test scores and forms an overall impression of whether the applicants would make successful operators.
A pure statistical method involves both collecting and combining data mechanically. For example, an applicant applies for an administrative assistant position by responding to a clerical skills test and a personality inventory via computer. The data collected are then scored and combined by a formula calculated by the computer. The selection supervisor receives a printout that lists applicants in the order of their overall combined scores.
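A minimal sketch of the pure statistical method just described, with hypothetical weights and scores; in practice the formula would come from a validation study:

```python
# Hypothetical combining formula: weights and applicant scores are
# illustrative assumptions, not validated values.
WEIGHTS = {"clerical_test": 0.6, "personality_inventory": 0.4}

def overall_score(scores: dict) -> float:
    """Mechanically combine predictor scores into one total."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

applicants = {
    "A": {"clerical_test": 80, "personality_inventory": 70},
    "B": {"clerical_test": 90, "personality_inventory": 60},
}
# The "printout": applicants listed in order of combined score, highest first.
ranked = sorted(applicants, key=lambda a: overall_score(applicants[a]), reverse=True)
print(ranked)  # ['B', 'A']
```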
A judgmental composite method is one in which both judgmental and mechanical data are collected and then combined judgmentally. This is probably one of the most commonly used methods. The decision maker judgmentally combines all information about an applicant and makes an overall judgment about the applicant's likely success on the job. For example, test (mechanical) information and unstructured interview (judgmental) information are collected on auto mechanic applicants. The selection manager looks at the information and makes an overall judgment of whether to employ the applicant, but uses no formula or other formal means to calculate an overall score.
Mechanical composite is a method that collects both judgmental and mechanical data and then combines them mechanically. Test scores, biographical data scores, interview ratings, or other predictor information are combined using a formula that predicts job success. For example, assessment centers used for managerial promotion decisions typically use a mechanical composite strategy for making decisions about whom to promote.
A judgmental synthesis approach is one in which all information (both mechanical and judgmental data) is first mechanically combined into a prediction of the applicant's likely success. Then this prediction is judged in the context of other information about the applicant. For example, engineering applicants are scored on the basis of tests and biographical data. A formula is used to compute an overall score. A panel of interviewers discusses the applicants' characteristics and makes its decisions about whom to hire based on both the applicants' predicted success (from the formula) and the interviewers' judgments of how well the applicants will fit in the organization.
Mechanical synthesis is a method that first combines subjectively all information (both mechanical and judgmental) into a prediction of the applicant's likely success. This prediction is then mechanically combined with other information (for example, test scores) to create an overall score for each applicant. For example, test scores and interview information are reviewed by a work team. Members of the work team make their individual predictions of the likely success of each applicant. These predictions are then entered with other information into a formula to score and predict the applicant's likely success on the job.
Which method is best?
A study on the role of clinical judgment in predicting human behavior showed that clinical experts' intuitive predictions of human behavior were significantly less accurate than predictions made using more formal, mechanical means (e.g. using a statistical equation for combining predictor information).
Another review examined 45 studies in which 75 comparisons were made regarding the relative efficiency (superior, equal, or inferior in prediction) of the eight methods of collecting and combining selection information just described. It found that the pure statistical, mechanical composite, and mechanical synthesis methods were always either equal or superior to the other methods.
- Why does mechanical combination of data yield better results than judgmental combination?
- The accuracy of prediction depends on the proper weighting of the predictor scores to be combined, regardless of which approach is used. Because it is almost impossible to judge what weights are appropriate with any degree of precision, even mechanical approaches that use equal weightings of predictor scores are more likely to produce better decisions than methods relying on human judgment.
- As data on additional applicants are added to already available data, more accurate models can be created statistically. This makes it possible to improve the decision-making model continuously and adapt it to changing conditions in the environment. Decision makers relying solely on judgment have cognitive limits on their ability to improve their prediction models. In fact, many decision makers rely on a judgmental model developed early in life and never change it, thereby increasing rather than decreasing selection errors.
- Decision makers relying on judgment can do as well as a statistical model only if they have been thorough, objective, and systematic in both collecting and combining the information. Because many managers and supervisors make selection decisions only sporadically, it is less likely they will be thorough, objective, and systematic in collecting and combining selection information. Consequently, their decisions will not equal those of a statistical model.
- Decision makers are likely to add considerable error if they are allowed to judgmentally combine subjective data (e.g. interview assessments) and objective data (e.g. test scores). Their implicit theories of good applicants (derived from past experience as well as other sources) may bias their evaluations and, ultimately, their decisions to select or reject an applicant. Inconsistency across decisions can have numerous causes: time pressure to make a decision, a bad day at home, or even comparisons with the most recently selected applicants.
- There is the problem of the overconfidence of many selection decision makers. Overconfident decision makers tend to overestimate how much they know. Overconfidence contributes to decision makers overweighting, or selectively identifying, only those applicant characteristics that confirm the decision makers' beliefs or hypotheses about those characteristics' association with some behaviour. Disconfirming information that might not fit the overconfident decision makers' hypotheses is largely ignored. Such decision makers do not learn from experience and therefore do not modify their methods. Statistical models make an allowance for such errors and reduce the impact of individual biases on decision outcomes.
Implications for selection decision makers
Using standardized, more objective selection procedures to collect applicant information, and then statistically combining the collected information, is better than using more subjective selection procedures and then subjectively judging how the information should be combined. Although subjective judgments resulting from gut feelings or intuition probably give selection decision makers both a feeling of control over the process and confidence in their judgments, the use of subjective judgments is usually not warranted by the quality of the decision outcomes.
In HR selection, many of our most frequently used predictors (e.g. the interview) involve some degree of judgment. Judgment may play a role in how an applicant responds to a question or how the interviewer interprets the response to that question. When additional predictor information is collected from applicants, judgment can also play a role as selection decision makers combine the information to reach a decision. As we have seen, better selection decision making is more likely to occur when judgment plays less of a role in the collecting and combining of selection procedure information. With this thought in mind, we recommend that selection decision makers do the following:
- 1. Be particularly careful about relying too heavily on resumes and other initial information collected early in the selection process. Research has shown that even experts are prone to placing too much emphasis on preliminary information, even when such information is questionable. For example, one experimental study involved actual HR managers. Asking these managers to ignore preliminary information on job applicants, and even rewarding them for ignoring such information, failed to keep the managers from emphasizing preliminary information in their selection decision making.
- 2. Use standardized selection procedures (that is, procedures in which content, administration, and scoring are the same for all applicants) that are reliable, valid, and suitable for the specific selection purpose (e.g. reducing absenteeism from work).
- 3. When feasible, use selection procedures that minimize the role of selection decision maker judgment in collecting information. For example, where appropriate - and assuming the measures meet point #2 above - use structured employment interviews, weighted application blanks, objective personality inventories, mental ability tests, work sample tests, physical ability tests, and computer-administered skill tests that have specific keys for scoring desirable and undesirable responses.
- 4. Avoid using judgment when combining data collected from two or more selection procedures to determine applicants' overall scores. When combining selection procedure information that has been systematically derived and applied, applying a mechanical formula or a statistical equation (such as a multiple regression equation) will generally enhance selection decision making. Even very simple formulas for combining information obtained from different sources, when properly developed and applied, have been found to produce more accurate decisions than the decisions of experts - when, that is, those expert decisions were based on a subjective combining of information.
Strategies for combining predictor scores:
For most jobs in many organizations, applicant information is obtained using more than one predictor. For example, a company may collect application blank and reference check information, administer an ability test, and conduct an employment interview with all applicants for its position opening. The following strategies conform to the use of mechanical methods for combining predictor information:
- 1. Multiple regression
- 2. Multiple cutoffs
- 3. Multiple hurdle
- 4. Combination method
No assumptions are made by the strategies about how the data are collected. Each focuses on systematic procedures for combining predictor information.
Multiple regression
In this method each applicant is measured on each predictor or selection procedure. Applicants' predictor scores are entered into an equation (called a regression prediction equation) that weights each score to arrive at a total score. Regression weights are determined by each predictor's influence in determining criterion performance. Using this model, it is possible for applicants with different individual predictor scores to have identical overall predicted scores.
Because high scores on one predictor can compensate for low scores on another, multiple regression is sometimes referred to as a compensatory method.
- The multiple regression approach makes two basic assumptions:
- 1. The predictors are linearly related to the criterion.
- 2. Because the predicted criterion score is a function of the sum of the weighted predictor scores, the predictors are additive and can compensate for one another (that is, performing well on one predictor compensates for performing less well on another).
- The multiple regression approach has several advantages. It minimizes errors in prediction and combines the predictors to yield the best estimate of an applicant's future performance on a criterion such as job performance. Furthermore, it is a very flexible method: it can be modified to handle nominal data, nonlinear relationships, and both linear and nonlinear interactions. Regression equations can be constructed for each of a number of jobs using either the same predictors weighted differently or different predictors. The decision maker then has three options:
- 1. If selecting for a single job, select the person with the highest predicted score. If selecting for two or more jobs, the decision maker has two further options:
- 2. Place each person on the job for which his or her predicted score is highest.
- 3. Place each person on the job where his or her predicted score is farthest above the minimum score necessary to be considered satisfactory.
- The multiple regression approach has disadvantages as well. Besides the assumption that scoring high on one predictor can compensate for scoring low on another, there are statistical issues that are sometimes difficult to resolve. For example, when relatively small samples are used to determine regression weights, the weights may not be stable from one sample to the next. For this reason, cross-validation of the regression weights is essential. Moreover, the multiple regression strategy requires assessing all applicants on all predictors, which can be costly with a large applicant pool.
- The multiple regression approach is most appropriate when a trade-off among predictor scores does not affect overall job performance. In addition, it is best used when the sample size for constructing the regression equation is large enough to minimize the statistical problems noted, although larger applicant pools also mean larger costs.
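A minimal sketch of a regression prediction equation, using made-up weights (real weights would be estimated from a validation sample and cross-validated). It also shows the compensatory property: two different predictor profiles yield the same predicted score.

```python
# Hypothetical regression prediction equation:
#   predicted performance = 0.25 + 0.5 * test + 0.3 * interview
# The intercept and weights are illustrative assumptions.
def predicted_performance(test: float, interview: float) -> float:
    return 0.25 + 0.5 * test + 0.3 * interview

# Compensatory property: a stronger interview offsets a weaker test score.
a = predicted_performance(test=4.0, interview=5.0)  # weaker test, stronger interview
b = predicted_performance(test=4.6, interview=4.0)  # stronger test, weaker interview
print(a, b)  # identical overall predicted scores: 3.75 3.75
```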
Multiple cutoffs
In this method, each applicant is assessed on each predictor. All predictors are scored on a pass-fail basis. Applicants are rejected if any one of their predictor scores falls below a minimum cutoff score. This method makes two important assumptions about job performance:
1. A nonlinear relationship exists between the predictors and the criterion - that is, a minimum amount of each important predictor attribute is necessary for successful performance of the job (the applicant must score above each minimum cutoff to be considered for the job).
2. Predictors are not compensatory. A lack or deficiency in any one predictor attribute cannot be compensated for by a high score on another (an applicant cannot have a "0" on any single predictor).
- The advantage of this method is that it narrows the applicant pool to a smaller subset of candidates who are all minimally qualified for the job. In addition, it is conceptually simple and easy to explain to managers.
- This approach has two major disadvantages. First, like the multiple regression approach, it requires assessing all applicants using all predictors. With a large applicant pool, the selection costs may be large. Second, the multiple cutoff approach identifies only those applicants minimally qualified for the job. There is no clear-cut way to determine how to order those applicants who pass the cutoffs.
- A multiple cutoff approach is probably most useful when physical abilities are essential for job performance. For example, eyesight, colour vision, and physical strength are required for such jobs as police, fire, and heavy manufacturing work. Another appropriate use of multiple cutoff scores is for jobs in which it is known that a minimum level of performance on a predictor is required to perform the job safely.
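The multiple cutoff screen can be sketched as follows; the predictor names and cutoff values are illustrative assumptions:

```python
# Hypothetical cutoffs: any score below its minimum means rejection.
CUTOFFS = {"eyesight": 70, "colour_vision": 60, "strength": 50}

def passes_all_cutoffs(scores: dict) -> bool:
    """Every applicant is assessed on every predictor; all must pass."""
    return all(scores[p] >= cutoff for p, cutoff in CUTOFFS.items())

applicants = {
    "A": {"eyesight": 85, "colour_vision": 65, "strength": 55},
    "B": {"eyesight": 90, "colour_vision": 58, "strength": 80},  # fails colour_vision
}
qualified = [name for name, s in applicants.items() if passes_all_cutoffs(s)]
print(qualified)  # ['A'] — a minimally qualified pool, with no rank order implied
```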
Multiple hurdle
In this strategy, each applicant must meet the minimum cutoff, or hurdle, for each predictor before going on to the next predictor.
To remain viable, applicants must pass the predictors sequentially. Failure to pass a cutoff at any stage in the selection process results in the applicant being dropped from further consideration.
In a variation of the multiple hurdle approach called the double-stage strategy, two cutoff scores are set, C1 and C2. Those whose scores fall above C2 are accepted unconditionally, and those whose scores fall below C1 are rejected terminally. Applicants whose scores fall between C1 and C2 are accepted provisionally, with a final decision made on the basis of additional testing. This approach has been shown to be equal or superior to all other strategies at all degrees of selectivity.
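The double-stage logic can be sketched as a three-way decision; the C1 and C2 values are assumed for illustration:

```python
# Hypothetical first-stage cutoffs: C1 (reject below) and C2 (accept above).
C1, C2 = 40, 70

def first_stage_decision(score: float) -> str:
    """Classify a first-stage score under the double-stage strategy."""
    if score >= C2:
        return "accept"        # accepted unconditionally
    if score < C1:
        return "reject"        # rejected terminally
    return "provisional"       # final decision made after additional testing

print(first_stage_decision(75), first_stage_decision(55), first_stage_decision(30))
# accept provisional reject
```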
- In the multiple hurdle approach, as in the multiple cutoff method, it is assumed that a minimum level of each predictor attribute is necessary for performance on the job. It is not possible for a high level of one predictor attribute to compensate for a low level of another without negatively affecting job performance. Thus the assumptions of the multiple hurdle approach are identical to those of the multiple cutoff approach. The only distinction between the two is the procedure for gathering predictor information. In the multiple cutoff approach the procedure is nonsequential, whereas in the hurdle approach the procedure is sequential. That is, applicants must achieve a minimum score on one selection procedure before they are assessed on another procedure. If they fail to achieve that minimum score, they are rejected.
- The multiple hurdle approach has the same advantages as the multiple cutoff approach. In addition, it is less costly, because the applicant pool becomes smaller at each stage of the selection process. More expensive selection devices can be used at later stages on only those applicants likely to be hired. For example, we might use an inexpensive selection procedure, such as a weighted application blank, to prescreen large numbers of applicants before conducting any (expensive) employment interviews. Interviews can then be conducted with the smaller number of applicants who pass the weighted application blank hurdle.
- One major disadvantage of this approach relates to establishing validity for each predictor. Because each stage in the selection process reduces the applicant pool to only those on the high end of the ability distribution, restriction of range is a likely problem. As in concurrent validation strategies, this means the obtained validity coefficients may be underestimated. An additional disadvantage is the increased time necessary to implement it. This time disadvantage is particularly important in employment situations where selection decisions must be made quickly. For example, in many computer software organizations, there may be high demand for individuals with appropriate programming skills, and employment decisions have to be reached quickly, before competitors attract viable applicants. In such a situation, a multiple hurdle approach is likely not appropriate: under this strategy, it takes time to administer a predictor, score it, and then decide whether to give the next predictor to the applicant. Good applicants may be lost before a final employment decision is reached.
- The multiple hurdle approach is most appropriate in situations where subsequent training is long, complex and expensive. It is also a useful decision-making approach when an essential knowledge, skill, or ability (that cannot be compensated for by the possession of higher levels of other KSAs) is necessary for job performance. For example, typing is an essential skill for a clerical job. Better-than-average filing skills cannot compensate for the inability to type. An organization has no reason to further evaluate applicants who cannot pass the typing test. The multiple hurdle approach is also appropriate when there is a large applicant pool and some of the selection procedures are expensive to administer. For example, when hiring for an information systems manager job, you may use a resume review as the first hurdle in order to narrow the applicant pool before administering more expensive assessment devices later to a smaller pool of applicants.
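A minimal sketch of sequential hurdle screening, with hypothetical hurdles ordered cheapest first so the pool shrinks before the expensive procedures:

```python
# Hypothetical (hurdle, cutoff) pairs, ordered from cheapest to most expensive.
HURDLES = [("application_blank", 50), ("typing_test", 60), ("interview", 70)]

def survives_hurdles(scores: dict) -> bool:
    """Apply hurdles in sequence; a single failure ends consideration."""
    for hurdle, cutoff in HURDLES:
        if scores[hurdle] < cutoff:
            # Dropped from further consideration; later (more expensive)
            # predictors are never administered to this applicant.
            return False
    return True

applicant = {"application_blank": 80, "typing_test": 55, "interview": 90}
print(survives_hurdles(applicant))  # False — fails at the typing-test hurdle
```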
Combination method
In this strategy, each applicant is measured on each predictor. An applicant with any predictor score below a minimum cutoff is rejected. Thus the combination method is identical to the multiple cutoff procedure up to this point. Next, multiple regression is used to calculate overall scores for all applicants who pass the cutoffs. The applicants who remain can then be rank-ordered based on their overall scores calculated by the regression equation. This part of the procedure is identical to the multiple regression approach.
Consequently, the combination method is a hybrid of the multiple-cutoff and multiple regression approaches. The combination method has two major assumptions. The more restrictive assumption is derived from the multiple-cutoff approach. That is, a minimal level of each predictor attribute is necessary to perform the job. After that level has been reached, more of one predictor attribute can compensate for less of another in predicting overall success of the applicants. This assumption is derived from the multiple regression approach.
The combination method has the advantages of the multiple cutoff approach. But rather than merely identifying a pool of acceptable candidates, as the multiple cutoff approach does, the combination approach additionally provides a way to rank-order acceptable applicants. The major disadvantage of the combination method is that it is more costly than the multiple hurdle approach, because all applicants are screened on all predictors. Consequently, it does not gain the cost savings afforded by the multiple hurdle approach's reduction of the applicant pool.
The combination method is most appropriate when the multiple cutoff assumption is reasonable and, above the minimum cutoffs, more of one predictor attribute can compensate for less of another. It is also more appropriate when the applicant pool is not too large and the costs of administering the selection procedures do not vary greatly among the procedures.
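A sketch of the combination method's two steps (cutoff screen, then regression-style ranking of the survivors); the cutoffs, weights, and scores are all assumed:

```python
# Hypothetical cutoffs and regression-style weights.
CUTOFFS = {"test": 50, "interview": 40}
WEIGHTS = {"test": 0.7, "interview": 0.3}

def rank_applicants(applicants: dict) -> list:
    """Step 1: reject anyone below a cutoff. Step 2: rank survivors."""
    qualified = {
        name: scores for name, scores in applicants.items()
        if all(scores[p] >= cutoff for p, cutoff in CUTOFFS.items())
    }

    def total(scores: dict) -> float:
        return sum(WEIGHTS[p] * scores[p] for p in WEIGHTS)

    return sorted(qualified, key=lambda n: total(qualified[n]), reverse=True)

applicants = {
    "A": {"test": 60, "interview": 90},  # passes both cutoffs
    "B": {"test": 95, "interview": 35},  # rejected: fails interview cutoff
    "C": {"test": 85, "interview": 45},  # passes both cutoffs
}
print(rank_applicants(applicants))  # ['C', 'A'] — B never enters the ranking
```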
Approaches for making employment decisions
After selection predictors have been administered to job applicants, employment decisions must be made as to which of the applicants will be extended offers of employment. If only one predictor is used, then decisions are made based on applicants' scores on that predictor. Usually, however, more than one predictor is employed. When multiple predictors are used for selection, we can use one of the strategies just discussed for combining predictor information.
Let's assume that we have administered our selection measures and derived a final selection score for each applicant. We are now ready to make a hiring decision, but how should we evaluate applicants' selection procedure scores? Numerous methods for selecting individuals for jobs have been reported in the human resource selection literature. We will devote our attention to three basic approaches: (a) top-down selection, (b) cutoff scores, and (c) banding.
The top-down selection approach rank-orders applicants' scores from highest to lowest. Then, beginning with the applicant at the top of the list (the one with the highest, and presumably best, score on the selection procedure) and moving toward the bottom, applicants are extended job offers until all positions are filled. If an applicant turns down a job offer, the next applicant on the list is offered the job.
Ranking applicants by their selection scores assumes that a person with a higher score will perform better on the job than a person with a lower score. If we assume that a valid predictor is linearly related to job performance, then the higher applicants' scores are on the predictor, the better their job performance. The economic return of employing an applicant who scores one standard deviation above the average on a valid predictor can be as much as 40 percent greater than that of employing an applicant scoring at the average. As far as job performance is concerned, maximum utility is gained from a predictor when top-down hiring is used.
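Top-down selection reduces to a sort followed by offers from the top; the scores and number of openings below are hypothetical:

```python
def top_down_offers(scores: dict, openings: int) -> list:
    """Rank applicants highest to lowest and take as many as there are openings."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:openings]

scores = {"A": 88, "B": 95, "C": 79, "D": 91}
print(top_down_offers(scores, 2))  # ['B', 'D'] — offers go to the top two scorers
```

If an offer is declined, the next applicant on the ranked list (here, A) would be offered the job.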
- The importance of top-down selection is illustrated by a case involving US Steel Corp. The plant changed from using top-down selection to using only employee seniority and minimum scores (equivalent to 7th grade) on a battery of valid cognitive ability tests. The tests were used for selecting entrants to the company's apprentice-training program. After adopting the new minimum selection criteria, US Steel found that (a) performance on mastery tests given during training fell dramatically, (b) apprenticeship trainee failure and dropout rates rose significantly, (c) for those trainees completing the program, training time and training costs increased substantially, and (d) later job performance of program graduates fell.
- The biggest problem with top-down selection, however, is that it is likely to lead to adverse impact against legally protected racial/ethnic groups. This troublesome outcome is most likely when cognitively based predictors, such as mental ability tests, are used in selection. Because white people tend to score higher, on average, than Black and Hispanic people, adverse impact is likely to occur under top-down selection. When adverse impact occurs, using cutoff scores in making employment decisions is one alternative.
In addition to top-down hiring, another strategy for using predictor scores in selection decision making is to use cutoff scores. A cutoff score represents a score on a predictor, or combination of predictors, below which job applicants are rejected. Before discussing several approaches, a few points should be made:
1. There is more than one possible cutoff score.
2. A cutoff score can vary from one employment context to another
3. Judgment will necessarily play a role both in choosing the method for setting the cutoff score and in determining the actual cutoff score value to be employed in selection.
These decisions will be affected by any number of considerations - such as the number of applicants and the percentage of those applicants hired for the position in question, the costs associated with recruiting and employing qualified applicants, the consequences of job failure (what happens if the cutoff score is set too low?), workforce diversity concerns, and so on.
- There is no single method of setting cutoff scores. A number of approaches can be used; their successful use will depend on how appropriate the chosen procedure is for a particular situation and how effectively the specific procedures are applied. Finally, when a cutoff score is developed and used, the rationale and specific procedures for identifying that particular score should be carefully documented. Written documentation of cutoff score determination is particularly valuable in an employer's defense against a claim of employment discrimination.
- In setting cutoff scores employers must maintain a delicate balance. That is, an employer can expose itself to legal consequences when cutoff scores are set too high and many qualified applicants are rejected. Conversely, cutoff scores can lose utility if they are set so low that a selection procedure fails to screen out unqualified job applicants.
- Cutoff scores can be established in at least two general ways:
- (a) basing the cutoff score on how job applicants or other persons performed on a selection procedure (sometimes labeled empirical score-setting procedures), and
- (b) using the judgments of knowledgeable experts regarding the appropriateness of selection procedure content (such as items on a written test) to set the cutoff score.
Basing Cutoff Scores on Applicants' or Others' Performance
Empirical methods used to set cutoff scores are generally based on the relationship between scores on predictors and performance on the job as measured by a criterion.
- Local norms developed for an organisation's own applicants can be helpful in some cutoff score setting methods.
- The predicted yield method requires obtaining the following information for establishing a cutoff score: (a) the number of positions available during some future time period, (b) the number of applicants expected during that time period, and (c) the expected distribution of applicants' predictor scores. The cutoff score is then determined based on the percentage of applicants needed to fill the positions.
- In the expectancy chart method, the same analytical procedure is used as in the predicted yield method to determine the expected selection ratio. Once the expected percentage of applicants to be rejected is determined, the score associated with that percentile, minus one standard error of measurement, is identified as the cutoff. Care must be taken to see that local norms are periodically updated, because applicant pool members' abilities and other qualifications can change over time.
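As a rough illustration of these two methods, the sketch below (hypothetical numbers, assuming a simple sorted-score lookup) converts an expected selection ratio into a cutoff score, optionally lowered by one standard error of measurement as in the expectancy chart method:

```python
def predicted_yield_cutoff(applicant_scores, positions_available, sem=0):
    """Set a cutoff from an expected applicant score distribution.

    The selection ratio (openings / expected applicants) determines what
    percentage of applicants must be rejected; the score at that percentile,
    optionally lowered by one standard error of measurement (sem), becomes
    the cutoff. All names and numbers here are illustrative assumptions.
    """
    scores = sorted(applicant_scores)
    n = len(scores)
    rejection_rate = 1 - positions_available / n  # fraction to screen out
    index = int(rejection_rate * n)               # percentile rank in the pool
    return scores[index] - sem

# 100 expected applicants, 20 openings -> reject the bottom 80%.
pool = list(range(1, 101))                     # stand-in score distribution
print(predicted_yield_cutoff(pool, 20))        # 81: score at the 80th percentile
print(predicted_yield_cutoff(pool, 20, sem=2)) # 79: lowered by one SEM of 2
```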
- Cutoff scores can also be determined by administering the selection procedure to individuals (e.g. job incumbents) other than applicants and using the information as a basis for score development. In one study, undergraduate students were administered tests to be used in selecting emergency telephone operators. The students' distribution of test performance scores (later verified by actual applicants' test scores) served as a basis for setting the cutoff scores. One note of caution: When a group of individuals other than applicants serves as a basis for deriving cutoff scores, care should be taken in determining that the group of individuals is comparable to the applicant pool for the job. When the groups are not comparable, legal questions concerning the fairness and meaningfulness of such scores can arise. E.g. in hiring patrol police officers, physical tests of lung capacity have sometimes been used. In setting a minimum lung capacity cutoff score for patrol police officers, it would not be appropriate to determine a cutoff score based on how a group of university track athletes performed on the lung capacity measure.
- If discrimination is judged to have resulted from cutoff score use, legal concerns regarding the score are likely to involve the following question: Is the discriminatory cutoff score used in screening applicants measuring the MINIMUM qualifications necessary for successful performance of the job? In addition to validation information, an affected employer would also need to be able to demonstrate that the chosen cutoff score does, indeed, represent a useful, meaningful selection standard with regard to job performance, risks, costs, etc.
- Contrasting the two distributions of predictor scores made by successful and unsuccessful job incumbents is another empirical approach. Subject matter experts judge the overlap of the two score distributions and set the cutoff score where the two distributions intersect. This method is most useful when there is a strong relationship between scores on the predictor and job performance.
- Simple regression (one predictor) or multiple regression (two or more predictors) can also be used to set cutoff scores. Assuming that an adequate number of individuals' predictor and criterion scores is available, correlational methods permit cutoff scores to be set by determining the specific predictor score associated with acceptable or successful job performance as represented by the criterion. Of course, regression methods also assume an adequate relationship between the selection procedure and the criterion, as well as a large sample, representative of the applicant pool, on which the statistical correlation is based.
Using Experts' Judgements
Cutoff scores are often set using judgmental or rational methods when empirical methods are not feasible. Under judgmental methods of cutoff score determination, the assessments of subject matter experts (e.g. job incumbents, supervisors) serve as the basis for establishing the relationship between predictor scores and job success. These assessments, in turn, serve as a basis for cutoff score development. In most cases, these approaches are used with multiple-choice written tests (such as job knowledge tests); some judgmental approaches have been applied to other selection procedures as well.
Several of these judgmental methods are:
- The Ebel method is based on an analysis of the difficulty of test items. First, experts rate all test items on the following two dimensions:
- 1. Difficulty (hard, medium, easy)
- 2. Relevance to job performance (essential, important, acceptable, questionable)
- These ratings produce 12 (3 x 4) categories of items. For each category, judges are asked what percentage of the items a borderline test taker would be able to answer correctly; e.g. "If a borderline test taker had to answer a large number of questions like these, what percentage would he or she answer correctly?" The cutoff score is calculated by multiplying the percentage of items correct by the number of questions in each category and then summing the products across all of the separate categories.
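The Ebel calculation above can be sketched as follows; the category counts and judged percentages are hypothetical, and only 3 of the 12 categories are shown:

```python
def ebel_cutoff(categories):
    """Ebel method sketch: each entry is one (difficulty x relevance)
    category, given as (judged proportion a borderline test taker would
    answer correctly, number of items in the category). The cutoff is the
    sum of proportion-correct times item-count across all categories."""
    return sum(pct_correct * n_items for pct_correct, n_items in categories)

# Hypothetical 3-category fragment of the full 12-category table:
cats = [(0.9, 10), (0.6, 20), (0.4, 5)]
print(ebel_cutoff(cats))  # about 23 items correct (9 + 12 + 2)
```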
- In the Angoff method, judges or SMEs (usually numbering 10 to 15) estimate the probability that a minimally qualified applicant could answer a specific test item correctly. These estimates are used to establish cutoff scores for the test. The judges think of a number of minimally acceptable persons, instead of only one such person, and estimate the proportion of minimally acceptable persons who would answer each item correctly. The sum of these probabilities, or proportions, then represents the minimally acceptable score.
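A minimal sketch of the Angoff computation, with hypothetical averaged SME estimates per item:

```python
def angoff_cutoff(item_probabilities):
    """Angoff method sketch: each value is the judges' averaged estimate of
    the probability that a minimally qualified applicant answers that item
    correctly; their sum is the minimally acceptable raw score."""
    return sum(item_probabilities)

# Hypothetical 5-item test with averaged SME estimates for each item:
estimates = [0.75, 0.5, 0.5, 0.25, 0.5]
print(angoff_cutoff(estimates))  # cutoff of 2.5 out of 5 items
```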
- A modification of this procedure, described as the modified Angoff method, reduces the score calculated by the Angoff method by one, two, or three standard errors of measurement. The adjustment has been accepted in numerous court cases, i.e. the modification lowered the average Angoff estimate by one to three standard errors of measurement. The court based its acceptance on several considerations: the risk of error, the degree of agreement among SMEs in their evaluations of minimum competency, the workforce supply and demand for the job, the race and gender composition of the jobs, and the standard error of measurement.
- Another variation of the Angoff method involved a 25-item in-basket examination used as part of a promotional procedure in a government organization. Each of the 25 items was scored on a 4-point scale, with 4 indicating a superior answer and 1 a clearly inferior response. SMEs were asked to review the in-basket items and respond to the following for each item: "Consider a minimally competent applicant for a middle-level manager's position in state government. This is not an outstanding candidate or even an average applicant, but one who could perform on tests at a minimally satisfactory level." A judge might indicate, for example, that most (60%) of the minimally competent applicants could write a response to a given in-basket item that would receive a score of 2. Average scores of judges for each item would be calculated, and these averages would then be totaled across all items to determine the cutoff score. This method would also be useful for other selection instruments, such as for determining cutoff scores for structured interviews.
- The Angoff procedure often produces higher cutoff scores than might be expected. To address this issue, the cutoff score is commonly lowered by one, two, or even three standard errors of measurement to limit the number of false negatives. However, these kinds of adjustments can produce unknown consequences for the employing organization.
- The contrasting groups method uses judgments of test takers, rather than of the test items, as the basis for determining cutoff scores. The first step is to divide test takers (usually job incumbents) into qualified and unqualified groups based on judgments of their knowledge and skills. The next step is to calculate the percentage of test takers who are qualified and unqualified at each test score. The cutoff score is chosen at the point where the proportion of qualified test takers equals the proportion of unqualified test takers. This assumes that rejecting unqualified candidates is as important as accepting qualified candidates. If it is desired to minimize either false positive or false negative errors, the cutoff score is raised or lowered accordingly.
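A sketch of the contrasting groups computation, using hypothetical incumbent scores and comparing proportions within each judged group:

```python
from collections import Counter

def contrasting_groups_cutoff(qualified_scores, unqualified_scores):
    """Contrasting groups sketch: return the lowest score at which the
    proportion of qualified test takers first equals or exceeds the
    proportion of unqualified ones, i.e. roughly where the two score
    distributions cross."""
    nq, nu = len(qualified_scores), len(unqualified_scores)
    q, u = Counter(qualified_scores), Counter(unqualified_scores)
    for score in sorted(set(q) | set(u)):
        if q[score] / nq >= u[score] / nu:
            return score
    return None

# Hypothetical incumbent scores, grouped by supervisors' judgments:
qualified = [6, 7, 7, 8, 9, 9, 10]
unqualified = [3, 4, 4, 5, 6, 6, 7]
print(contrasting_groups_cutoff(qualified, unqualified))  # 7
```

Raising or lowering the returned score would then trade off false positives against false negatives, as noted above.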
- When using judgments from SMEs to develop cutoff scores, a key concern of the courts is who served as an expert. E.g. when physical ability tests are developed and validated, women and minority group members must be represented among the experts used. Ideally, their percentage representation among the subject matter experts would match their representation in the qualified applicant pool. Although it may be necessary to oversample women and minority group members to have adequate representation, random selection of full-time, non-probationary job incumbents should generally be employed.
Selected Guidelines for Using Cutoff Scores in Selection Procedures
- Cutoff scores are not required by legal or professional guidelines; thus, first decide whether a cutoff score is necessary.
- There is not one best method of setting cutoff scores for all situations.
- If a cutoff score is to be used for setting a minimum score requirement, begin with a job analysis that identifies levels of proficiency on essential knowledge, skills, abilities, and other characteristics (e.g. as input to the Angoff method).
- If judgmental methods are used (e.g. Angoff), include a 10-20% sample of SMEs representative of the race, gender, shift, and so on of the employee (or supervisor) group. Representative experience of SMEs on the job under study is the most critical consideration in choosing SMEs.
- If job incumbents are used to develop the cutoff score to be used with job applicants, consider setting the cutoff score one standard error of measurement below incumbents' average score on the selection procedure.
- Set cutoff scores high enough to ensure that at least minimum standards of job performance are achieved.
One problem facing human resource selection decision makers is that adverse impact frequently occurs against racial minorities with selection procedures such as mental ability tests. Race norming was one means developed for dealing with the problem of adverse impact arising from the use of selection procedures.