SHOULD ORGANISATIONS BE CONCERNED WITH
FAKING AND CHEATING?

Faking the PI Behavioral Assessment

The increased use of personality assessments in the workplace has brought renewed concerns about applicant response distortion (e.g., “faking”). Based on over 70 years of experience and a body of empirical research performed by organisational researchers, The Predictive Index does not believe faking is an issue when using well-designed personality assessments such as the PI Behavioral Assessment.

What is faking?

Faking occurs when applicants purposefully distort their responses to personality-based items to make themselves look more favorable. Research shows that applicants can alter their responses on tests when specifically instructed to do so. For example, in various studies conducted in carefully controlled laboratory settings, people instructed to “fake good” received higher or different scores than those told to respond honestly.

“Faking” is often regarded as deceitful behavior because it is perceived as manipulative or purposeful “gaming” of the assessment. Research suggests that most faking is due to socially desirable responding or impression management. “Socially desirable responding” simply means that applicants are uncomfortable admitting to faults or unfavorable self-descriptions. Impression management, on the other hand, involves applicants actively monitoring the situation (in this case, the job for which they are applying) and trying to put their best self forward. Both socially desirable responding and impression management are normal and occur during almost every job interview and social interaction. They are adaptive behaviors—not malicious ones—that help people get along or get ahead in the workplace.

What is the major concern with faking?

If an applicant fakes a personality assessment, the results likely do not accurately describe the behavioral tendencies of the assessment-taker. The biggest concern among organisations is the belief that faking is so pervasive that the majority of applicants’ results are invalid. In other words, the concern is that personality assessments may not be effective tools in the hiring process.

Do most applicants fake on personality assessments, and how does this affect validity?

The consensus within the industrial-organizational psychology community is that faking is typically not a widespread problem. In fact, a March 2007 Journal of Applied Psychology research report indicated that less than 4% of job applicants distorted their responses to an extreme degree. In “real-world” hiring situations, most job applicants do not consciously manipulate their responses to personality assessments, which means that assessment tools serve their purpose as valid predictors of job performance.

Most of the evidence reported in scholarly journals of psychology indicates that faking has a slight (if any) negative impact on predictive validity, and for some jobs which require impression management skills (e.g., sales), faking may even increase validity. Thus, unless one has reason to suspect that most applicants within a job category are faking substantially, faking should not be considered overly problematic.

What steps does The Predictive Index take to reduce the likelihood of faking?

The PI Behavioral Assessment is a free-choice instrument with no obvious ‘right’ or ‘wrong’ answers. It is extremely difficult, if not impossible, for test-takers to consciously and accurately map an individual item on the assessment back to the underlying behavioral factor it is intended to measure. The assessment uses an adjective checklist, which provides less clarity about which responses are considered ‘desirable’ than questions framed as full sentences, where the ‘right’ answer is more obvious (e.g., “I follow the rules every day”). This makes the PI Behavioral Assessment more challenging to manipulate. In addition, the assessment is periodically updated to remove adjectives that are clearly socially desirable or undesirable (i.e., adjectives that are endorsed very frequently or almost never).

Because PI Behavioral Assessment scores are reported and interpreted using a within-person format, the ability to manipulate the assessment is further reduced. A test-taker who tries to distort their responses must not only choose the ‘right’ words, but must also identify the right combination of words (e.g., more A and B words, fewer C words than D words) to arrive at an ideal pattern for a job. This would be extremely challenging even for a test-taker intimately familiar with the assessment’s scoring mechanisms.

The PI Behavioral Assessment is widely perceived by test-takers to be fair, non-invasive, easy to understand, and job-related. Research indicates that assessments perceived in such a manner do not typically suffer from faking. The PI Behavioral Assessment is administered in a casual, non-threatening manner, and when administering the assessment to current employees, many organisations agree to review results with test-takers as soon as possible. These practices all reduce the likelihood of faking. 

To date, no PI client has indicated that faking was a serious problem or concern, whether among job applicants or current employees. Hundreds of criterion-related validity studies, covering a wide range of jobs and organisations over 30 years, consistently demonstrate that PI Behavioral Assessment patterns are related to objective job performance. If the assessment were truly impacted by faking, this overwhelming body of evidence of its ability to predict and improve job performance would not exist.

Assessment results should be considered only a single data point in a hiring decision, with ample opportunities during interviews, reference checks, and other processes to indirectly validate the results of the assessment. In fact, social desirability and impression management are likely to be of greater concern during interviews than during completion of the PI Behavioral Assessment.

In short, based on years of peer-reviewed academic research and PI’s own experiences, there is no reason to believe that applicant faking systematically impacts the utility and validity of the PI Behavioral Assessment in any substantial manner. 

Cheating on the PI Cognitive Assessment

With the increased use of AI and chatbots, clients may be concerned about test-takers using these tools to obtain a higher score on the PI Cognitive Assessment, which could be considered cheating. This is something the science team at The Predictive Index takes seriously and actively monitors.

What PI is Doing

PI continually tracks cognitive scores across all assessment results and has not seen any significant changes since the rise of AI tools like ChatGPT. In addition, PI has eliminated the ability to copy text from the PI Cognitive Assessment site, meaning a candidate cannot copy and paste content from the assessment into an AI tool. PI has also implemented blurring of the Cognitive Assessment site should a candidate navigate away from it as their primary window. This prevents a candidate from keeping ChatGPT or a similar tool open in one window and the Cognitive Assessment in another.

Our Advice for Clients

At this point, we do not recommend that you actively monitor every completion of the PI Cognitive Assessment. Instead, we encourage you to have a brief follow-up conversation with each candidate about their experience. This can help confirm they completed it themselves. For example, you could ask:

  • “How did you approach the questions – one at a time or skipping around?”
  • “Which types of questions did you prefer?”
  • “Which questions did you find most challenging?”

These questions help verify authenticity without creating unnecessary complexity in your process.

If You Suspect Cheating

If you are concerned that someone may have cheated on the PI Cognitive Assessment, it may be a good idea to test them a second time under your supervision. This can be done by having them complete the assessment on-site or over a web camera. One indicator of AI usage is a near-100% score on the Verbal and Numeric items combined with a very low score on the Abstract items. In that case, you should explore the approach taken by the assessment-taker through a conversation (see the questions above). Remember, this is an indicator of AI usage, not proof.

Reassurance Through Data

If there is internal hesitation about using the PI Cognitive Assessment because of AI, we encourage you to review your own historical scores. One client, for example, was under the impression that their scores had increased. We compared results for their graduate programme in 2021 (before AI was widely used) and in 2025. In fact, the average was higher in 2021 than in 2025, though 2021 had a marginally lower percentage of candidates with a score of 40–50 on the test. This example shows that even if more candidates are using AI, it does not result in higher scores on average.

Why We’re Confident

  • Continuous monitoring has shown no signs of AI-related score inflation.
  • Research suggests only around 4% of test-takers attempt to cheat in general—and the presence of AI doesn’t necessarily increase this tendency.
  • The timed nature of the PI Cognitive Assessment limits the practicality of using outside tools.

In short, there is no evidence that AI is impacting the results of the PI Cognitive Assessment, and there is no need to change your assessment practices beyond a follow-up conversation with candidates.
