Eye to I with AI: Job Interview with AI!
For a long time, organizations have had a core function focused on the people who work in them, a practice dating back to 1901. It goes by many modern titles: People Operations, Employee Experience (EX), Talent Management, People & Culture, or Employee Success. But most people, when they see the letters H and R together, immediately know what we're talking about: Human Resources. Correct?
Many people think of HR as the department that looks out for the employee's best interests... maybe? Regardless of whether that's true, most of us assume that one of its main functions is recruitment, now often called Talent Acquisition. As we're all part of a global, connected society where the internet is a must-have, we can apply for work anywhere. Since HR only indirectly generates revenue, the function often faces cost-cutting pressures and is usually understaffed. To cope, recruiters have adopted various tools: multiple interview rounds, "meet the team" sessions, case studies, take-home tasks, psychometric tests, and, ultimately, applicant tracking systems (ATS) that screen résumés for keywords.
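To make that last point concrete, here is a minimal, purely illustrative sketch of the kind of keyword screening an ATS might perform; the keywords, threshold, and scoring logic are my own assumptions for illustration, not taken from any real product.

```python
import re

# Hypothetical required keywords for a job posting (illustrative only).
REQUIRED_KEYWORDS = {"python", "sql", "stakeholder management", "agile"}

def keyword_score(resume_text: str, keywords: set[str]) -> float:
    """Return the fraction of required keywords found as whole words/phrases."""
    text = resume_text.lower()
    hits = sum(
        1 for kw in keywords
        if re.search(r"\b" + re.escape(kw) + r"\b", text)
    )
    return hits / len(keywords)

def passes_screen(resume_text: str, threshold: float = 0.75) -> bool:
    """A résumé 'passes' if it matches at least 75% of the required keywords."""
    return keyword_score(resume_text, REQUIRED_KEYWORDS) >= threshold

resume = "Data analyst with Python and SQL experience, comfortable in agile teams."
print(keyword_score(resume, REQUIRED_KEYWORDS))  # 0.75
print(passes_screen(resume))                     # True
```

Crude as it looks, this is roughly the level of nuance a carefully written résumé is up against at the first gate.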
In one way or another, these professionals have always had a human as their "main client": a person, a Homo sapiens. In today's relentless pursuit of maximum market share at minimum cost, I find setting up an AI to conduct interviews to be a striking shift. I'll now take off my PM hat and look at this tool from the perspectives of the employer, the hiring staff, and the job seeker.
From the Employer's Perspective
Putting on the employer's cap, I see the AI interviewer as a core product feature designed to solve an immense pain point: scaling and standardizing the initial screening process.
So, what is the value proposition?
Massive Scalability: The AI can conduct thousands of initial interviews simultaneously, 24/7. This dramatically reduces the Time-to-Hire metric, a critical HR KPI (a rough sketch of how this metric can be computed follows this list).
Cost Savings: It eliminates the need for early-stage interviewer time, which is often high-value time taken from managers or senior HR staff. In effect, it converts a variable cost (interviewer hours per candidate) into a predictable, fixed cost (the AI tooling).
Standardization: The AI ensures every candidate is asked the exact same questions in the exact same order. For data quality, this is a data geek's dream (and I am one). It removes interviewer variability (mood, fatigue, rapport) from the screening process, theoretically increasing perceived fairness. I said theoretically….
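As a rough illustration of that Time-to-Hire point, here is a minimal sketch; definitions of the KPI vary, so I am assuming it runs from application date to offer acceptance, and the dates below are invented.

```python
from datetime import date
from statistics import mean, median

# Hypothetical (application_date, offer_accepted_date) pairs, purely illustrative.
hires = [
    (date(2024, 3, 1), date(2024, 4, 12)),
    (date(2024, 3, 5), date(2024, 3, 28)),
    (date(2024, 3, 10), date(2024, 5, 2)),
]

# Time-to-Hire per candidate, in days.
days_to_hire = [(accepted - applied).days for applied, accepted in hires]

print(f"Average time to hire: {mean(days_to_hire):.1f} days")  # 39.3 days
print(f"Median time to hire: {median(days_to_hire)} days")     # 42 days
```

If an AI screen shaves even a few days off the front of that pipeline across thousands of candidates, the KPI moves visibly, which is exactly the story the vendor will tell.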
And, what are the trade-offs and risks?
Poor Candidate Experience: A poorly designed AI interview can feel cold, impersonal, and frustrating. If candidates feel like they're talking to a 'bot with a script,' they might post negative reviews.
False Negatives (Missed Talent): If the AI's algorithm is too strict or focused only on keywords, it risks filtering out highly capable candidates who simply don't fit the 'perfect' linguistic template. In other words, the AI generates false negatives, a critical failure for a talent acquisition tool (a small illustration follows this list).
The Flawed Promise of Emotional Scoring: Advanced AI tools promise to evaluate "soft skills" through speech analysis, sentiment, and even facial expression scoring. This feature is highly error-prone, misinterpreting cultural differences, nervousness, or neurological differences as a lack of confidence or poor communication. Relying on an "emotional score" can lead to false negatives (rejecting excellent talent) and introduce a new, subtle layer of bias into the system.
Candidate Drop-off (Application Resistance): There is a significant risk that candidates self-select out. Once applicants hear the screening is conducted solely by AI, high-demand or passive candidates may choose not to apply, viewing it as a sign of a cold, impersonal company culture. This directly shrinks the available talent pool.
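To illustrate the false-negative risk flagged above, here is a small, hypothetical example in which a strict exact-phrase filter rejects a clearly capable candidate simply for writing "ML" and "Golang" instead of the phrases the recruiter typed in; the requirements and résumé text are invented.

```python
import re

def matches_all(resume_text: str, required: list[str]) -> bool:
    """Strict screen: every required phrase must appear verbatim (whole words)."""
    text = resume_text.lower()
    return all(
        re.search(r"\b" + re.escape(kw) + r"\b", text) for kw in required
    )

# Hypothetical requirements, phrased exactly the way the recruiter wrote them.
required = ["machine learning", "go", "kubernetes"]

# A strong candidate who happens to use common abbreviations and synonyms.
resume = ("Built ML pipelines on Kubernetes and wrote backend services "
          "in Golang for five years.")

print(matches_all(resume, required))  # False: "ML" != "machine learning", "Golang" != "go"
```

A human skimming that résumé would pass it along in seconds; the filter never gets that far.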
From the Employer's Employees' Perspective
Those who work alongside the AI (HR) and those who will supervise the new hires (managers and team leads) view the AI interviewer through the lens of job security, efficiency, and quality control.
What is the value proposition?
Relief from Drudgery: HR staff will view the AI as a positive tool for automating the most tedious part of their job: initial high-volume screening. They can now focus on high-value tasks like candidate relationship management and negotiation.
Hope for Higher Quality: Managers hope the AI will act as a strong filter, ensuring that the few candidates passed on to them are genuinely qualified and possess the required technical keywords. They hope the AI reduces time wasted on poorly matched interviews.
What are the trade-offs and risks?
Data Overload: The AI generates massive amounts of data (transcripts, scores). Recruiters can be quickly overwhelmed, requiring new tools and training just to effectively interpret the AI's output. Is the solution another AI to help with the previous AI's output?
Ethical Conflict: Recruiters, whose roles are fundamentally about human connection, may experience internal conflict. They might feel pressure to blindly trust the AI's scores, even when they suspect a potentially great candidate was unfairly rejected by the system.
Mistrust of Fit: Managers are responsible for team performance and culture fit. They often deeply mistrust an algorithm's ability to assess crucial elements like collaboration style, humour, or resilience. They may demand to re-interview candidates who received a high AI score but seem "off" to them based on the AI's notes alone, effectively adding a step back into the process.
Input Demand: Managers might demand a voice in training the AI—insisting that the system prioritizes the specific skills their team needs, rather than relying on generic, company-wide criteria.
From Regular Joe's Perspective
Wait, I already have this hat. This is the perspective of the end user, the job candidate, whom I call "Regular Joe." This is where the crucial user experience happens.
What is the value proposition? My first thought: none. But then I started thinking.
24/7 Accessibility and Flexibility: The candidate can complete the interview on his own schedule, often outside of normal business hours, without needing to coordinate calendars with a busy recruiter. This makes the application process significantly more convenient for employed individuals or those in different time zones.
Perceived Impartiality: The candidate may initially view the AI as a fair, objective tool. Since the AI doesn't see his age, ethnicity, gender, or hear an accent (unless programmed to judge these traits, which is unethical), there is a hope that he will be judged purely on the content of his answers and keywords, bypassing early-stage human bias.
Faster Decisions: Because the AI processes data instantly, the candidate hopes for a much quicker turnaround time on whether he advances or is rejected, shortening the agonizing waiting period often associated with traditional recruitment.
What are the trade-offs and risks?
The Psychological Wall: We humans rely on non-verbal cues (a nod, a smile, eye contact) to gauge how an interaction is progressing. The AI offers none of this, forcing the candidate to speak to a camera or a blank screen. This creates an anxiety-inducing black box where he feels he is performing without an audience or feedback, leading to increased stress and unnatural responses.
Gaming the System: Knowing the AI is focused on data and keywords, the candidate is incentivized to research "how to beat the bot" rather than focusing on genuine self-expression. He might use unnatural, overly technical language or buzzwords to satisfy the algorithm, trading authenticity for perceived compliance. This obscures his true communication style and personality. He might even set up an AI of his own to respond.
Technical Failure and Accessibility: The AI system introduces technical hurdles: what if the candidate's internet connection fails, his microphone cuts out, or the platform crashes? These technical glitches can lead to an automatic and unfair rejection, something rarely an issue in a phone screen. Furthermore, candidates with certain speech impediments or physical differences may find the AI's voice recognition or video analysis discriminatory, creating significant accessibility risks.
The Inscrutable Rejection: The most common risk is the silent failure. When the candidate is rejected, he is simply told he wasn't successful. He has no way of knowing why—was it a lack of a specific keyword, an unnatural speaking pace, a low emotional score from the algorithm, or simply a connection error? This lack of transparency and actionable feedback makes the rejection deeply unsatisfying and unhelpful for future attempts.
The Ethical Black Box (Data for Training): A significant risk is the deep mistrust that the candidate's interview performance (including video, voice data, and responses) is being used to train the AI model for future interviews, effectively making the candidate a free worker improving the system. If this practice is not made explicit and opt-in, it creates an ethical and privacy breach. The candidate is giving away valuable data that improves the employer's recruitment tool, without any compensation or clear consent, fuelling suspicion and resentment.
Final thoughts
The deployment of AI in job interviewing is a textbook example of a product that excels at efficiency but struggles with empathy.
I recently received an email notification that I could do an AI interview, giving me seven days to complete it. They even attached a video of someone, who looked sad and strained, demonstrating how easy it was. I chose not to proceed.
But many people don't have that choice. They must try, for themselves or their loved ones. There are countless stories and videos online of people applying for their dream job, preparing diligently, only to talk to a glitching AI before receiving that inevitable, generic email: "Unfortunately, we have to inform you…."
I love AI; I use it, learn about it daily, and even rely on it for research. It is a powerful tool. But even if it were a sentient being, it is a different species. And we, as humans, crave connection and rely heavily on non-verbal cues: the often-cited (and much-debated) 90% of communication that can hardly be quantified.
When companies choose to automate the initial human connection, they must recognize the cost: a loss of brand reputation, a reduction in the available talent pool, and a profound alienation of the prospective employee.
Would you proceed with an AI interview if you had a choice?