Subjects -> BUSINESS AND ECONOMICS (Total: 3530 journals)
    - ACCOUNTING (132 journals)
    - BANKING AND FINANCE (306 journals)
    - BUSINESS AND ECONOMICS (1229 journals)
    - COOPERATIVES (4 journals)
    - ECONOMIC SCIENCES: GENERAL (201 journals)
    - HUMAN RESOURCES (103 journals)
    - INSURANCE (26 journals)
    - INTERNATIONAL COMMERCE (145 journals)
    - INVESTMENTS (22 journals)
    - MACROECONOMICS (17 journals)
    - MANAGEMENT (595 journals)
    - MARKETING AND PURCHASING (106 journals)
    - MICROECONOMICS (23 journals)
    - PUBLIC FINANCE, TAXATION (37 journals)

HUMAN RESOURCES (103 journals)                     

Showing 1 - 92 of 92 Journals sorted by number of followers
Asia Pacific Journal of Human Resources     Hybrid Journal   (Followers: 224)
Human Resource Management     Hybrid Journal   (Followers: 76)
Organizational Behavior and Human Decision Processes     Hybrid Journal   (Followers: 74)
Human Resource Management Journal     Hybrid Journal   (Followers: 72)
Human Relations     Hybrid Journal   (Followers: 61)
Human Resource Management Review     Hybrid Journal   (Followers: 60)
International Journal of Human Resource Management     Hybrid Journal   (Followers: 52)
Annual Review of Organizational Psychology and Organizational Behavior     Full-text available via subscription   (Followers: 50)
Journal of Accounting and Economics     Hybrid Journal   (Followers: 45)
Accounting, Organizations and Society     Hybrid Journal   (Followers: 43)
Contemporary Accounting Research     Full-text available via subscription   (Followers: 34)
Journal of Accounting Research     Hybrid Journal   (Followers: 34)
Human Resource Development Quarterly     Hybrid Journal   (Followers: 29)
Review of Accounting Studies     Hybrid Journal   (Followers: 28)
Accounting Auditing & Accountability Journal     Hybrid Journal   (Followers: 26)
Human Resource Development Review     Hybrid Journal   (Followers: 26)
Advances in Developing Human Resources     Hybrid Journal   (Followers: 25)
Personality and Individual Differences     Hybrid Journal   (Followers: 25)
Accounting Forum     Hybrid Journal   (Followers: 23)
International Journal of Human Resources Development and Management     Hybrid Journal   (Followers: 23)
American Journal of Finance and Accounting     Hybrid Journal   (Followers: 23)
Accounting and Business Research     Hybrid Journal   (Followers: 22)
Journal of Human Development and Capabilities: A Multi-Disciplinary Journal for People-Centered Development     Hybrid Journal   (Followers: 22)
European Accounting Review     Hybrid Journal   (Followers: 20)
Human Resource Management Research     Open Access   (Followers: 19)
Human Resource Development International     Hybrid Journal   (Followers: 19)
Open Journal of Leadership     Open Access   (Followers: 18)
International Journal of Accounting and Finance     Hybrid Journal   (Followers: 17)
Personnel Review     Hybrid Journal   (Followers: 16)
Critical Perspectives on Accounting     Hybrid Journal   (Followers: 16)
Accounting Education: An International Journal     Hybrid Journal   (Followers: 16)
International Journal of Banking, Accounting and Finance     Hybrid Journal   (Followers: 15)
European Journal of Training and Development     Hybrid Journal   (Followers: 14)
International Journal of Management Development     Hybrid Journal   (Followers: 13)
Public Personnel Management     Hybrid Journal   (Followers: 13)
Review of Public Personnel Administration     Hybrid Journal   (Followers: 13)
International Journal of Human Resource Studies     Open Access   (Followers: 13)
British Accounting Review     Hybrid Journal   (Followers: 12)
Advances in Accounting     Hybrid Journal   (Followers: 11)
New Horizons in Adult Education and Human Resource Development     Hybrid Journal   (Followers: 11)
International Journal of Management Education     Hybrid Journal   (Followers: 11)
Journal of Human Capital     Full-text available via subscription   (Followers: 11)
International Journal of Behavioural Accounting and Finance     Hybrid Journal   (Followers: 11)
Review of Quantitative Finance and Accounting     Hybrid Journal   (Followers: 10)
International Journal of Accounting, Auditing and Performance Evaluation     Hybrid Journal   (Followers: 9)
Journal of Accounting and Public Policy     Hybrid Journal   (Followers: 7)
Journal of Accounting Education     Hybrid Journal   (Followers: 7)
Qualitative Research in Accounting & Management     Hybrid Journal   (Followers: 7)
Human Resource and Organization Development Journal     Open Access   (Followers: 7)
Attachment & Human Development     Hybrid Journal   (Followers: 7)
Strategic HR Review     Hybrid Journal   (Followers: 6)
Journal of Service Management     Hybrid Journal   (Followers: 6)
Journal of Human Resource Costing & Accounting     Hybrid Journal   (Followers: 6)
German Journal of Human Resource Management     Hybrid Journal   (Followers: 5)
Journal of Organizational Effectiveness: People and Performance     Hybrid Journal   (Followers: 5)
Journal of Human Values     Hybrid Journal   (Followers: 5)
Journal of Professions and Organization     Free   (Followers: 5)
Journal of International Accounting, Auditing and Taxation     Hybrid Journal   (Followers: 5)
Research in Human Development     Hybrid Journal   (Followers: 5)
Afro-Asian Journal of Finance and Accounting     Hybrid Journal   (Followers: 5)
Journal of Contemporary Accounting & Economics     Hybrid Journal   (Followers: 4)
Coaching: Theorie & Praxis     Open Access   (Followers: 4)
South Asian Journal of Human Resources Management     Full-text available via subscription   (Followers: 4)
Australian Accounting Review     Hybrid Journal   (Followers: 4)
International Journal of Accounting Information Systems     Hybrid Journal   (Followers: 4)
Journal of Chinese Human Resource Management     Hybrid Journal   (Followers: 4)
Corporate Governance and Organizational Behavior Review     Open Access   (Followers: 3)
Journal of Accounting & Organizational Change     Hybrid Journal   (Followers: 3)
Evidence-based HRM     Hybrid Journal   (Followers: 3)
Journal of Global Responsibility     Hybrid Journal   (Followers: 3)
International Journal of Human Capital and Information Technology Professionals     Full-text available via subscription   (Followers: 3)
International Journal of Ethics and Systems     Hybrid Journal   (Followers: 2)
Journal of HR intelligence     Open Access   (Followers: 2)
Pacific Accounting Review     Hybrid Journal   (Followers: 2)
International Journal of Critical Accounting     Hybrid Journal   (Followers: 2)
Journal of Marketing and HR     Open Access   (Followers: 2)
Accounting and the Public Interest     Full-text available via subscription   (Followers: 2)
International Journal of Economics and Accounting     Hybrid Journal   (Followers: 1)
Sri Lankan Journal of Human Resource Management     Open Access   (Followers: 1)
Intangible Capital     Open Access   (Followers: 1)
Journal of Advances in Management Research     Hybrid Journal   (Followers: 1)
EURO Journal on Decision Processes     Hybrid Journal   (Followers: 1)
Journal of Human Resource and Sustainability Studies     Open Access   (Followers: 1)
NHRD Network Journal     Full-text available via subscription  
Human Resource Research     Open Access  
Personnel Assessment and Decisions     Open Access  
Kelaniya Journal of Human Resource Management     Open Access  
Revista Gestión de las Personas y Tecnología     Open Access  
Psychologie du Travail et des Organisations     Hybrid Journal  
FOR Rivista per la formazione     Full-text available via subscription  
Journal of Enterprising Communities: People and Places in the Global Economy     Hybrid Journal  
Asian Review of Accounting     Hybrid Journal  


Personnel Assessment and Decisions
Number of Followers: 0  

  This is an Open Access journal
ISSN (Print) 2377-8822
Published by Bowling Green State University
  • Comparing Empirically Keyed and Random Forest Scoring Models in Biodata

    • Authors: Mathijs Affourtit et al.
      Abstract: Effective pre-hire assessments impact organizational outcomes. Recent developments in machine learning give practitioners an opportunity to improve upon existing scoring methods. This study compares the effectiveness of an empirically keyed scoring model with a machine learning (random forest) approach in a biodata assessment. Data were collected across two organizations. The data from the first sample (N = 1,410) were used to train the model using sample sizes of 100, 300, 500, and 1,000 cases, whereas data from the second organization (N = 524) were used as an external benchmark only. When using a random forest model, predictive validity rose from 0.382 to 0.412 in the first organization, while a smaller increase was seen in the second organization. It was concluded that the predictive validity of biodata measures can be improved using a random forest modeling approach. Additional considerations and suggestions for future research are discussed.
      PubDate: Mon, 28 Mar 2022 09:36:48 PDT
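      As a minimal sketch of the empirical-keying baseline the study compares against: each biodata item is weighted by its observed correlation with the criterion, and an applicant's score is the weighted sum of responses. The data, weights, and function names below are hypothetical illustrations, not the article's materials; a random forest (e.g., scikit-learn's RandomForestRegressor) would replace this linear key with an ensemble of decision trees.

```python
# Empirical keying sketch (hypothetical data): weight each item by its
# correlation with the criterion, then score applicants as a weighted sum.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy) if vx and vy else 0.0

def empirical_key(items, criterion):
    """One weight per item column: its correlation with the criterion."""
    return [pearson(col, criterion) for col in zip(*items)]

def score(key, responses):
    return sum(w * r for w, r in zip(key, responses))

# Hypothetical training data: 4 applicants x 2 biodata items, plus a rating.
train_items = [[1, 0], [2, 1], [3, 1], [4, 0]]
performance = [1.0, 2.0, 3.0, 4.0]
key = empirical_key(train_items, performance)  # item 1 carries no signal here
```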
  • The Attention to Detail Test: Measurement Precision and Validity Evidence
           for a Performance-Based Assessment of Attention to Detail

    • Authors: Brent A. Stevenor et al.
      Abstract: We report on the dimensionality, measurement precision, and validity of the Attention to Detail Test (ADT) designed to be a performance-based assessment of people’s ability to pay attention to detail. Within the framework of item response theory, we found that a 3PL bifactor model produced the most accurate item parameter estimates. In a predictive validity study, we found that the ADT predicted supervisor ratings of subsequent overall job performance and performance on detail-oriented tasks. In a construct-related study, scores on the ADT correlated most strongly with the personality facet of perfectionism. The test also correlated with intelligence and self-reported ACT scores. The implications of modeling the ADT as unidimensional or multidimensional are discussed. Overall, our findings suggest that the ADT is a valid measure of attention to detail ability and a useful selection tool that organizations can use to select for detail-oriented jobs.
      PubDate: Mon, 28 Mar 2022 09:36:36 PDT
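      For readers unfamiliar with the notation: in the three-parameter logistic (3PL) model the abstract builds on, the probability that a person with ability theta answers item i correctly is

```latex
P_i(\theta) = c_i + \frac{1 - c_i}{1 + e^{-a_i(\theta - b_i)}}
```

      where a_i is the item discrimination, b_i its difficulty, and c_i its lower asymptote (pseudo-guessing) parameter; the bifactor structure additionally loads each item on one general factor plus one group factor.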
  • Faking Is as Faking Does: A Rejoinder to Marcus (2021)

    • Authors: Robert P. Tett et al.
      Abstract: Applicant faking poses serious threats to achieving personality-based fit, negatively affecting both the worker and the organization. In articulating this “faking-is-bad” (FIB) position, Tett and Simonet (2021) identify Marcus’ (2009) self-presentation theory (SPT) as representative of the contrarian “faking-is-good” camp by its advancement of self-presentation as beneficial in hiring contexts. In this rejoinder, we address 20 of Marcus’ (2021) claims in highlighting his reliance on an outdated empiricist rendering of validity, loosely justified rejection of the negative and moralistic “faking” label, disregard for the many challenges posed by blatant forms of faking, inattention to faking research supporting the FIB position, indefensibly ambiguous constructs, and deep misunderstanding of person–workplace fit based on personality assessment. In demonstrating these and other limitations of Marcus’ critique, we firmly uphold the FIB position and clarify SPT as headed in the wrong direction.
      PubDate: Mon, 28 Mar 2022 09:36:28 PDT
  • “Faking” is Neither Good Nor Bad, It Is a Misleading Concept: A Reply
           to Tett and Simonet (2021)

    • Authors: Bernd Marcus
      Abstract: This paper comments on Tett and Simonet’s (2021) outline of two contradictory positions on job applicants’ self-presentation on personality tests, labelled “faking is bad” (FIB) versus “faking is good” (FIG). Based on self-presentation theory (Marcus, 2009), which Tett and Simonet assigned to their FIG camp, I develop the ideas of (a) understanding self-presentation from the applicant’s rather than the employer’s perspective, (b) avoiding premature moral judgment on this behavior, and (c) examining consequences for the validity of applicant responses with a focus on the intended use for, and the competitive context of, selection. Conclusions include (a) that self-presentation is motivationally and morally more complex than assumed by proponents of the FIB view; (b) that its consequences for validity are ambivalent, which implies that simple credos like “FIB” or “FIG” are equally unjustified; and (c) that the label “faking” should be abandoned in scientific inquiry on the phenomena at hand, as it contributes to prejudiced and often erroneous conclusions.
      PubDate: Mon, 28 Mar 2022 09:36:20 PDT
  • The Effect of English Language Proficiency and Glossary Provision on
           Personality Measurement

    • Authors: Damian Canagasuriam et al.
      Abstract: Research on English language learners suggests that language proficiency can affect the validity of standardized test scores. This study examined whether the provision of a glossary as a test accommodation during personality test completion influences the measurement of personality. Using an experimental research design, participants recruited from Amazon Mechanical Turk and Prime Panels (n = 206) were first categorized as having limited or high English language proficiency and then randomly assigned to a glossary condition. The results indicate that providing a within-text glossary does not impact the construct validity and reliability of personality measures. The results also suggest that participants who received glossaries found them useful. However, those who were not provided with one disagreed that they would benefit from the provision of a glossary.
      PubDate: Mon, 28 Mar 2022 09:36:12 PDT
  • Interviews from Scratch: Individual Differences in Writing Interview
           Questions

    • Authors: Lauren J. Wegmeyer et al.
      Abstract: Against best practice recommendations, interviewers prefer unstructured interviews where they are not beholden to regimentation. In cases where interviews are less structured, the interviewer typically generates his or her own set of interview questions. Even in structured interviews though, the initial interview content must be generated by someone. Thus, it is important to understand the different factors that influence what types of questions individuals generate in interview contexts. The current research aims to understand the types of interview questions individuals generate, factors that affect the quality of those questions, how skill in generating interview questions relates to skill in evaluating existing interview questions, and how individual traits relate to skill in generating interview questions. Results show that respondents who are skilled in evaluating existing interview questions are also skilled in writing interview questions from scratch, and these skills relate to general mental ability and social intelligence. Respondents generated questions that most commonly assessed applicant history and self-perceived applicant characteristics, whereas only 30% of questions generated were situational or behavioral.
      PubDate: Mon, 28 Mar 2022 09:36:00 PDT
  • On the Continued Misinterpretation of Stereotype Threat as Accounting for
           Black-White Differences on Cognitive Tests

    • Authors: Dana H. Tomeh et al.
      Abstract: Steele and Aronson (1995) showed that stereotype threat affects the test performance of stereotyped groups. A careful reading shows that threat affects test performance but does not eliminate Black–White mean score gaps. Sackett et al. (2004) reviewed characterizations of this research in scholarly articles, textbooks, and the popular press, and found that many mistakenly inferred that removing stereotype threat eliminated the Black–White performance gap. We examined whether the rate of mischaracterization of Steele and Aronson had decreased in the 15 years since Sackett et al. highlighted the common misinterpretation. We found that the misinterpretation rate dropped from 90.9% to 62.8% in journal articles and from 55.6% to 41.18% in textbooks, though the decrease was statistically significant only in journal articles.
      PubDate: Mon, 28 Mar 2022 09:35:51 PDT
  • A Test of Expectancy Theory and Demographic Characteristics as Predictors
           of Faking and Honesty in Employment Interviews

    • Authors: Jordan L. Ho et al.
      Abstract: Job applicants vary in the extent to which they fake or stay honest in employment interviews, yet the contextual and demographic factors underlying these behaviors are unclear. To help answer this question, we drew on Ellingson and McFarland’s (2011) framework of faking based in valence-instrumentality-expectancy theory. Study 1 collected normative data and established baseline distributions for instrumentality-expectancy beliefs from a Canadian municipality. Results indicated that most respondents had low levels of instrumentality-expectancy beliefs for faking, but high levels for honesty. Moreover, income, education, and age were antecedents of instrumentality-expectancy beliefs. Study 2 extended these findings with a United States sample and sought to determine if they could be explained by individual differences. Results demonstrated that financial insecurity predicted instrumentality of faking, whereas age predicted expectancy of faking. Finally, valence-instrumentality-expectancy beliefs were all predictors of self-reported faking in a past interview.
      PubDate: Tue, 26 Oct 2021 10:11:27 PDT
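      The valence-instrumentality-expectancy framework invoked above is Vroom's classic expectancy model, in which motivational force is the product of three beliefs:

```latex
MF = V \times I \times E
```

      where V (valence) is the attractiveness of the outcome, I (instrumentality) the belief that the behavior produces the outcome, and E (expectancy) the belief that effort enables the behavior; a near-zero value on any component drives the whole force toward zero.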
  • An Investigation of Interviewer Note Taking in the Field

    • Authors: Jacob S. Fischer et al.
      Abstract: Although a key component of a structured interview is note taking, relatively few studies have investigated the effects of note taking. To address this lack of research, we conducted a study that examined the effects of note taking in a work setting. As predicted, we found that the total number of notes taken by interviewers and the level of detail of these notes were positively related to the ratings these interviewers gave to job applicants, that interviewer ratings of applicants who were hired were predictive of their job performance ratings, and that interviewer ratings mediated the relationships between note taking and performance ratings (i.e., the number of notes and their level of detail did not have a direct effect on performance ratings). We also showed that, if uncontrolled, interviewer nesting can result in misleading conclusions about the value of taking detailed notes.
      PubDate: Tue, 26 Oct 2021 10:11:18 PDT
  • Scientific, Legal, and Ethical Concerns About AI-Based Personnel Selection
           Tools: A Call to Action

    • Authors: Nancy T. Tippins et al.
      Abstract: Organizations are increasingly turning toward personnel selection tools that rely on artificial intelligence (AI) technologies and machine learning algorithms that, together, intend to predict the future success of employees better than traditional tools. These new forms of assessment include online games, video-based interviews, and big data pulled from many sources, including test responses, test-taking behavior, applications, resumes, and social media. Speedy processing, lower costs, convenient access, and applicant engagement are often and rightfully cited as the practical advantages for using these selection tools. At the same time, however, these tools raise serious concerns about their effectiveness in terms of their conceptual relevance to the job, their basis in a job analysis to ensure job relevancy, their measurement characteristics (reliability and stability), their validity in predicting employee-relevant outcomes, their evidence and normative information being updated appropriately, and the associated ethical concerns around what information is being represented to employers and told to job candidates. This paper explores these concerns, concluding with an urgent call to industrial and organizational psychologists to extend existing professional standards for employment testing to these new AI and machine learning based forms of testing, including standards and requirements for their documentation.
      PubDate: Tue, 26 Oct 2021 10:11:08 PDT
  • Job Seekers’ Impression Management on Facebook: Scale Development,
           Antecedents, and Outcomes

    • Authors: Vanessa Myers et al.
      Abstract: Many organizations rely on social media like Facebook as a screening or selection tool; however, research still largely lags behind practice. For instance, little is known about how individuals are strategically utilizing their Facebook profile while applying for jobs. This research examines job seekers’ impression management (IM) tactics on Facebook, personality traits associated with IM use, and associations between IM and job-search outcomes. Results from two complementary studies demonstrate that job seekers engage in three main Facebook IM tactics: defensive, assertive deceptive, and assertive honest IM. Job seekers lower in Honesty–Humility use more Facebook IM tactics, whereas those higher in Extraversion use more honest IM and those higher on Conscientiousness use less deceptive IM. Honest IM tactics used on Facebook are positively related to job-search outcomes. This paper therefore extends previous IM research by empirically examining IM use on Facebook, along with its antecedents and outcomes.
      PubDate: Mon, 17 May 2021 19:07:01 PDT
  • “If Others Are Honest, I Will Be Too”: Effects of Social Norms on
           Willingness to Fake During Employment Interviews

    • Authors: Samantha Sinclair et al.
      Abstract: Applicant faking in employment interviews is a pressing concern for organizations. It has previously been suggested that subjective norms may be an important antecedent of faking, but experimental studies are lacking. We report a preregistered experiment (N = 307) where effects of conveying descriptive social norms (information about what most applicants do) on self-reported willingness to fake were examined. Although we observed no difference between the faking norm condition and the control condition, in which no norm was signaled, participants in the honesty norm condition reported lower willingness to fake compared to those in both the faking norm condition and the control condition. The latter supports the idea that conveying honesty norms may be an effective means of reducing faking, although future research needs to evaluate its usefulness in real employment interviews.
      PubDate: Mon, 17 May 2021 19:07:01 PDT
  • The Effect of Organizational Culture on Faking in the Job Interview

    • Authors: Damian Canagasuriam et al.
      Abstract: Deceptive impression management (i.e., faking) may alter interviewers’ perceptions of applicants’ qualifications and, consequently, decrease the predictive validity of the job interview. In examining faking antecedents, research has given little attention to situational variables. Using a between-subjects experiment, this research addressed that gap by examining whether organizational culture impacted both the extent to which applicants faked and the manner in which they faked during a job interview. Analyses of variance revealed that organizational culture did not affect the extent to which applicants faked. However, when taking into account applicants’ perceptions of the ideal candidate, organizational culture was found to indirectly impact the manner in which applicants faked their personality (agreeableness and honesty-humility). Overall, the findings suggest that applicants may be able to fake their personality traits during job interviews to increase their person–organization fit.
      PubDate: Mon, 17 May 2021 19:06:51 PDT
  • Liar, Liar, Pants on Fire: How Verbal Deception Cues Signal Deceptive
           Versus Honest Impression Management and Influence Interview Ratings

    • Authors: Lenke Roth et al.
      Abstract: Impression management (IM), especially deceptive IM (faking), is a cause for concern in selection interviews. The current study combines findings on lie detection with signaling theory to address how candidates’ deceptive versus honest IM shows in verbal deception cues, which then relate to interview ratings of candidates’ interview performance. After completing a structured interview rated by two trained interviewers, 182 candidates reported their deceptive and honest IM. Verbal deception cues (plausibility, verbal uncertainty) were coded from video recordings. Results supported the hypotheses: Deceptive IM directly raised interviewer ratings (intended positive signal) but lowered the responses’ plausibility and enhanced verbal uncertainties (unintended negative signals). Honest IM raised responses’ plausibility. Plausibility related positively to interviewer ratings (receiver reaction), thus accounting for a negative indirect effect of deceptive IM and a positive indirect effect of honest IM on interviewer ratings. This study contributes to theory and practice regarding faking detection in employment interviews.
      PubDate: Mon, 17 May 2021 19:06:40 PDT
  • Unintended Consequences of Interview Faking: Impact on Perceived Fit and
           Affective Outcomes

    • Authors: Brooke D. Charbonneau et al.
      Abstract: Drawing on signalling theory, we propose that use of deceptive impression management (IM) in the employment interview can produce false signals, and individuals hired on the basis of such signals may incur consequences once they are on the job, such as poor perceived fit. We surveyed job applicants who recently interviewed and received a job to investigate the relationship between use of deceptive IM in the interview and subsequent perceived person–job and person–organization fit, stress, well-being, and employee engagement. In a two-phase study, 206 job applicants self-reported their use of deceptive IM in their interviews at Time 1, and their perceived person–job and person–organization fit, job stress, affective well-being, and employee engagement at Time 2. Deceptive IM had a negative relationship with perceived person–job and person–organization fit. As well, perceived fit accounted for the relationship between deceptive IM and well-being, employee engagement, and job stress. The findings indicate that using deceptive IM in the interview may come at a cost to employees.
      PubDate: Mon, 17 May 2021 19:06:39 PDT
  • Identifying Faking on Forced-Choice Personality Items Using Mouse Tracking

    • Authors: Irina Kuzmich et al.
      Abstract: This research utilizes mouse tracking as a potential behavioral method to examine cognitive processes underlying faking on forced-choice personality inventories. Mouse tracking is a method from social categorization research that captures a variety of metrics related to motor movements, which are linked to cognitive processing. To explore the utility of this method, we examined differences in the mouse tracking metrics of those instructed to respond honestly or to fake. Our findings show that there is a distinguishable difference in the behavioral response of those who are faking when responding to pairs of personality descriptors presented in a forced-choice format compared to those who are responding honestly. Implications and contributions of this study include insights into the cognitive processing that can occur while responding to personality items when respondents are faking and a demonstration of how mouse tracking methods can be used to detect faking.
      PubDate: Mon, 17 May 2021 19:06:30 PDT
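      One concrete example of the motor-movement metrics the abstract refers to is maximum deviation: the largest perpendicular distance of the cursor path from the straight line joining its start and end points. The sketch below uses a hypothetical trajectory and is an illustration of the metric, not the study's code.

```python
# Maximum deviation (MD), a standard mouse-tracking metric: the largest
# perpendicular distance of sampled cursor points from the start-end line.
from math import hypot

def max_deviation(path):
    (x0, y0), (x1, y1) = path[0], path[-1]
    dx, dy = x1 - x0, y1 - y0
    length = hypot(dx, dy)
    # Point-to-line distance via the cross product of (point - start)
    # with the start-end direction, normalized by the line length.
    return max(abs(dy * (x - x0) - dx * (y - y0)) / length for x, y in path)

# Hypothetical cursor samples: a path that bows one unit away from the
# straight line between (0, 0) and (2, 0).
traj = [(0, 0), (1, 1), (2, 0)]
md = max_deviation(traj)
```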
  • A New Investigation of Fake Resistance of a Multidimensional Forced-Choice
           Measure: An Application of Differential Item/Test Functioning

    • Authors: Philseok Lee et al.
      Abstract: To address faking issues associated with Likert-type personality measures, multidimensional forced-choice (MFC) measures have recently come to light as important components of personnel assessment systems. Despite various efforts to investigate the fake resistance of MFC measures, previous research has mainly focused on the scale mean differences between honest and faking conditions. Given the recent psychometric advancements in MFC measures (e.g., Brown & Maydeu-Olivares, 2011; Stark et al., 2005; Lee et al., 2019; Joo et al., 2019), there is a need to investigate the fake resistance of MFC measures through a new methodological lens. This research investigates the fake resistance of MFC measures through recently proposed differential item functioning (DIF) and differential test functioning (DTF) methodologies for MFC measures (Lee, Joo, & Stark, 2020). Overall, our results show that MFC measures are more fake resistant than Likert-type measures at the item and test levels. However, MFC measures may still be susceptible to faking if MFC measures include many mixed blocks consisting of positively and negatively keyed statements within a block. It may be necessary for future research to find an optimal strategy to design mixed blocks in the MFC measures to satisfy the goals of validity and scoring accuracy. Practical implications and limitations are discussed in the paper.
      PubDate: Mon, 17 May 2021 19:06:29 PDT
  • Faking and the Validity of Personality Tests: An Experimental
           Investigation Using Modern Forced Choice Measures

    • Authors: Christopher R. Huber et al.
      Abstract: Despite the established validity of personality measures for personnel selection, their susceptibility to faking has been a persistent concern. However, the lack of studies that combine generalizability with experimental control makes it difficult to determine the effects of applicant faking. This study addressed this deficit in two ways. First, we compared a subtle incentive to fake with the explicit “fake-good” instructions used in most faking experiments. Second, we compared standard Likert scales to multidimensional forced choice (MFC) scales designed to resist deception, including more and less fakable versions of the same MFC inventory. MFC scales substantially reduced motivated score elevation but also appeared to elicit selective faking on work-relevant dimensions. Despite reducing the effectiveness of impression management attempts, MFC scales did not retain more validity than Likert scales when participants faked. However, results suggested that faking artificially bolstered the criterion-related validity of Likert scales while diminishing their construct validity.
      PubDate: Mon, 17 May 2021 19:06:28 PDT
  • Applicant Faking on Personality Tests: Good or Bad and Why Should We
           Care'

    • Authors: Robert P. Tett et al.
      Abstract: The unitarian understanding of construct validity holds that deliberate response distortion in completing self-report personality tests (i.e., faking) threatens trait-based inferences drawn from test scores. This “faking-is-bad” (FIB) perspective is being challenged by an emerging “faking-is-good” (FIG) position that condones or favors faking and its underlying attributes (e.g., social skill, ATIC) to the degree they contribute to predictor–criterion correlations and are job relevant. Based on the unitarian model of validity and relevant empirical evidence, we argue the FIG perspective is psychometrically flawed and counterproductive to personality-based selection targeting trait-based fit. Carrying forward both positions leads to variously dark futures for self-report personality tests as selection tools. Projections under FIG, we suggest, are particularly serious. FIB offers a more optimistic future but only to the degree faking can be mitigated. Evidence suggesting increasing applicant faking rates and other alarming trends makes the FIB versus FIG debate a timely if not urgent matter.
      PubDate: Mon, 17 May 2021 19:06:18 PDT
  • Put Your Best Foot Forward: Introduction to the Special Issue on
           Understanding Effects of Impression Management on Assessment Outcomes

    • Authors: Chet Robie et al.
      PubDate: Mon, 17 May 2021 19:06:13 PDT
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Tel: +00 44 (0)131 4513762


JournalTOCs © 2009-