Overview
The professional standards of industrial-organizational (I-O) psychologists require that any selection system be based on a job analysis to ensure that the selection criteria are job-related. The requirements of a selection system are characteristics known as KSAOs: knowledge, skills, abilities, and other characteristics. US law also recognizes ''bona fide occupational qualifications'' (BFOQs), job requirements that would be considered discriminatory if they were not necessary, such as employing only men as wardens of maximum-security male prisons, enforcing a mandatory retirement age for airline pilots, a religious college employing only professors of its religion to teach its theology, or a modeling agency hiring only women to model women's clothing. Personnel selection systems employ evidence-based practices to determine the most qualified candidates, considering both new hires and individuals who can be promoted from within the organization. A selection system has "validity" if a clear relationship can be shown between the system itself and the job for which people are being selected. Job analysis is therefore a vital part of selection: an analysis is typically conducted before, and often as part of, the development of a selection system. Alternatively, a selection method may be validated after it has already been implemented, by conducting a follow-up job analysis and demonstrating the relationship between the selection process and the job in question. The personnel selection process involves gathering information about candidates in order to determine their suitability for a particular job.
This data is gathered using one or more selection tools or methods, including:
* Application forms
* Interviews
* Personality tests
* Biographical data
* Cognitive ability tests
* Physical ability tests
* Work samples
Development and implementation of such screening methods is sometimes done by human resources departments; larger organizations hire consultants or firms that specialize in developing personnel selection systems. I-O psychologists must evaluate evidence regarding the extent to which selection tools predict job performance, evidence that bears on the validity of the selection tools. These procedures are usually validated (shown to be job relevant) using one or more types of validity evidence, such as content, criterion-related, or construct validity.

History and development
Chinese civil service examinations, established in AD 605, may be the first documented "modern" selection tests, and they influenced subsequent examination systems. As a scientific and scholarly field, personnel selection owes much to psychometric theory, while the practical work of integrating selection systems typically falls to human resource professionals.

Validity and reliability
Validity of interviews
The validity of interviews describes how useful interviews are in predicting job performance. One of the most comprehensive meta-analytic summaries to date is Wiesner and Cronshaw (1988), which investigated interview validity as a function of interview format (individual vs. board) and degree of structure (structured vs. unstructured). Results showed that structured interviews yielded much higher mean corrected validities than unstructured interviews (0.63 vs. 0.20), and that structured board interviews using consensus ratings had the highest corrected validity (0.64). In their comprehensive review and meta-analysis of interview validity, McDaniel, Whetzel, Schmidt, and Maurer (1994) went a step further and examined the validity of three different types of interview content: situational, job-related, and psychological. Their goal was to explore the possibility that validity is a function of the type of content collected. They defined the three kinds of content as follows. Situational questions ask how the interviewee would behave in specific situations presented by the interviewer; for example, whether the interviewee would report a coworker for behaving unethically or let the matter go. Job-related questions assess the interviewee's past behavior and job-related information. Psychological questions are intended to assess the interviewee's personality traits, such as work ethic, dependability, and honesty. The authors conducted a meta-analysis of all previous studies on the validity of interviews across these three types of content. Their results show that for job-performance criteria, situational interviews yield a higher mean validity (0.50) than job-related interviews (0.39), which in turn yield a higher mean validity than psychological interviews (0.29).
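The "mean corrected validities" quoted in these meta-analyses are observed correlations adjusted with standard psychometric corrections, most commonly for criterion unreliability and for range restriction in the applicant pool. A minimal sketch of those two corrections, with purely illustrative numbers (the figures below are not taken from the studies cited):

```python
import math

def correct_for_attenuation(r_xy, r_yy):
    """Correct an observed validity r_xy for unreliability in the criterion (r_yy)."""
    return r_xy / math.sqrt(r_yy)

def correct_for_range_restriction(r, u):
    """Thorndike Case II correction, where u = SD(applicants) / SD(hires) >= 1."""
    return (r * u) / math.sqrt(1 + r * r * (u * u - 1))

# Hypothetical example: an observed interview validity of 0.30,
# criterion reliability of 0.60, and a range-restriction ratio of 1.5.
observed = 0.30
step1 = correct_for_attenuation(observed, 0.60)
step2 = correct_for_range_restriction(step1, 1.5)
print(round(step1, 2), round(step2, 2))  # prints: 0.39 0.53
```

Both corrections raise the estimate, which is why corrected validities (such as the 0.63 for structured interviews above) are larger than the raw correlations observed in any single study. The order in which the corrections are applied is one common sequence, not the only one used in the literature.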
This means that when the interview is used to predict job performance, it is best to conduct situational interviews rather than job-related or psychological interviews. On the other hand, when interviews are used to predict an applicant's training performance, the mean validity of job-related interviews (0.36) is somewhat lower than the mean validity of psychological interviews (0.40). Going beyond the content of the interview, the authors extended their analysis to include an assessment of how the interview was conducted. Here, two questions emerged: are structured interviews more valid than unstructured interviews, and are board interviews (with more than one interviewer) more valid than individual interviews? On the first question, they found that structured interviews, regardless of content, are more valid (0.44) than unstructured interviews (0.33) in predicting job-performance criteria. However, when training performance is the criterion, the validities of structured and unstructured interviews are similar (0.34 and 0.36). On the second question, a further meta-analysis comparing board and individual interviews for job-performance criteria showed that individual interviews are more valid than board interviews (0.43 vs. 0.32), regardless of whether the individual interview is structured or unstructured. When exploring the variance in interview validity across job-performance, training-performance, and tenure criteria, the researchers found that interviews are similar in predictive accuracy for job performance and training performance (0.37 vs. 0.36) but less predictive for tenure (0.20).

Validity of cognitive ability and personality tests
Based on meta-analytic results, cognitive ability tests appear to be among the most valid of all psychological tests and are valid for most occupations, although they tend to predict training criteria better than long-term job performance. Cognitive ability tests also have the benefit of being generalizable: they can be used across organizations and jobs, and they have been shown to produce large economic gains for companies that use them (Gatewood & Feild, 1998; Heneman et al., 2000). Despite their high validity, however, they are less frequently used as selection tools. One main reason is that cognitive ability testing has been demonstrated to produce adverse impact: on average, some groups, including Hispanic and African-American test takers, score lower than the general population, while other groups, including Asian Americans, score higher (Heneman et al., 2000; Lubenski, 1995). The legal issues with cognitive ability testing were amplified by the Supreme Court's ruling in the famous 1971 case ''Griggs v. Duke Power Co.''

Predictor validity and selection ratio
Two major factors determine the quality of newly hired employees: predictor validity (also called predictive validity) and the selection ratio. The predictor cutoff is a test score that differentiates those who pass a selection measure from those who do not; people scoring at or above it are hired or further considered, while those scoring below it are not. If the test accurately differentiates between successful and unsuccessful workers at this cutoff, it is high in predictor validity. The selection ratio (SR), on the other hand, is the number of job openings ''n'' divided by the number of job applicants ''N''. This value ranges between 0 and 1 and reflects the selectivity of the organization's hiring practices. When the SR is equal to or greater than 1, the use of any selection device has little meaning, but this is rarely the case, as there are usually more applicants than job openings. Finally, the base rate is the percentage of employees found to be performing their jobs satisfactorily once performance is measured.

Selection decisions
Tests designed to determine an individual's aptitude for a particular position, company, or industry may be referred to as personnel assessment tools. Such tests can aid those charged with hiring personnel both in selecting individuals for hire and in placing new hires in appropriate positions. They vary in the measurements they use and the level of standardization they employ, though all are subject to error. Predictors for selection always have less than perfect validity; scatter plots, as well as other forecasting methods such as judgmental bootstrapping and index models, can help refine a prediction model and identify mistakes. The criterion cutoff is the point separating successful from unsuccessful performers according to a standard set by the hiring organization. True positives are those who passed the selection test and have, in fact, performed satisfactorily on the job. True negatives are those who were correctly rejected based on the measure because they would not have been successful employees. False negatives occur when people are rejected because of selection test failure but would have performed well on the job. Finally, false positives are individuals who passed the selection measure but do not make successful employees. These selection errors can be minimized by increasing the validity of the predictor test. Standards for setting the cutoff score vary widely, but the score should be consistent with the expectations of the relevant job. Adjusting the cutoff to reduce one type of error automatically increases the other, so it is important to determine which type of error is more harmful on a case-by-case basis. Banding is another method for setting cutoff values.
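The four outcomes just described, together with the selection ratio and base rate defined earlier, can be illustrated with a short sketch. All scores and cutoffs below are hypothetical values chosen for illustration:

```python
# Each applicant is a (predictor score, eventual job-performance score) pair.
applicants = [
    (82, 75), (91, 58), (60, 80), (55, 40), (78, 72), (45, 65),
]
predictor_cutoff = 70   # pass the selection measure at or above this score
criterion_cutoff = 60   # perform satisfactorily at or above this score

def classify(predictor, criterion):
    passed = predictor >= predictor_cutoff
    succeeded = criterion >= criterion_cutoff
    if passed and succeeded:
        return "true positive"    # selected and performed satisfactorily
    if passed and not succeeded:
        return "false positive"   # selected but performed poorly
    if not passed and succeeded:
        return "false negative"   # rejected but would have performed well
    return "true negative"        # correctly rejected

outcomes = [classify(p, c) for p, c in applicants]

# Selection ratio: openings n divided by applicants N.
openings = 2
selection_ratio = openings / len(applicants)

# Base rate: share of the group performing satisfactorily on the criterion.
base_rate = sum(c >= criterion_cutoff for _, c in applicants) / len(applicants)
```

Raising the predictor cutoff in this sketch trades false positives for false negatives, which is the case-by-case judgment the text describes.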
Some differences in test scores are ignored: applicants whose scores fall within the same band (or range) are selected not on the basis of their individual scores but on some other factor, such as one chosen to reduce adverse impact. The width of the band is a function of test reliability, the two being negatively correlated: the more reliable the test, the narrower the band. Because selection within a band may even be random, many have criticized the technique as allowing employers to ignore test scores altogether.

Predicting job performance
A meta-analysis of selection methods in personnel psychology found that general mental ability was the best overall predictor of job performance and training performance. Regarding interview procedures, there are data that call these tools into question as a means of selecting employees. While the aim of a job interview is ostensibly to choose a candidate who will perform well in the role, other selection methods provide greater predictive validity and often entail lower costs. Unstructured interviews are commonly used, but structured interviews tend to yield better outcomes and are considered better practice. Interview structure is defined as "the reduction in procedural variance across applicants, which can translate into the degree of discretion that an interviewer is allowed in conducting the interview" (Huffcutt & Arthur, 1994).

See also
References
External links