Google brings back in‑person interviews to fight AI cheating

Illustration of a Google candidate at a whiteboard coding interview, highlighting the return of in‑person interviews to reduce AI cheating.

Google’s recruitment team will now require at least one face‑to‑face interview for many roles. The move responds to rampant use of generative AI during remote coding tests and has revived debate about fairness, authenticity and surveillance.

Introduction

The job interview is evolving again. Amid rising reports that candidates are leaning on large language models to pass technical screenings, Google has decided to reintroduce in‑person interviews as part of its hiring process. The company announced that candidates for technical positions must complete at least one face‑to‑face assessment, even if initial rounds occur online. News of the policy change has set the internet abuzz. A Reddit post on r/leetcode describing the shift shot to the top of the community, and X users have shared memes about engineers sweating through whiteboard problems without AI prompts. The decision underscores the growing tension between AI‑enabled productivity and the need to assess human skills honestly.

Why Google is making the change

In a recent podcast conversation, Google CEO Sundar Pichai acknowledged what many recruiters already knew: more than half of technical candidates were using generative AI tools during remote interviews. Applicants would often paste coding challenges into chatbots to get instant solutions or ask LLMs to draft responses to design questions. That made it difficult for Google to gauge whether candidates truly understood the underlying concepts. Pichai said the company would “introduce at least one round of in‑person interviews to make sure the fundamentals are there.”

Recruiters across the industry share similar concerns. Tech hiring exploded during the pandemic, and remote interviews became the norm. But as AI assistance proliferated, the signal‑to‑noise ratio in hiring plummeted. Some applicants even used voice‑modulation software and deepfakes to impersonate more experienced engineers. An internal Google town hall reportedly featured employees pleading for onsite interviews to return, saying that virtual assessments had become a “game of who can use AI better.”

What the hybrid process will look like

Google is not ditching remote interviews entirely. The company still values the flexibility and efficiency that online screenings provide, especially for preliminary rounds or roles across time zones. The new approach blends convenience with authenticity: candidates will complete one or more remote challenges and then travel to a Google office for a final, in‑person assessment. There, they’ll be asked to write code on a whiteboard, collaborate on problem‑solving and discuss their experience with actual team members.

To accommodate accessibility and geographic constraints, the company will offer travel stipends and allow some candidates to use Google‑approved testing centres instead of campus visits. Pichai said the goal is not to increase stress but to restore trust in the hiring pipeline. Recruiters will be trained to evaluate problem‑solving strategies rather than just the final answer, and the company plans to experiment with open‑book questions that encourage candidates to explain their reasoning.

Industry‑wide crackdown on AI cheating

Google isn’t alone. Consultancy firms like McKinsey and Deloitte have already reinstated at least one in‑person interview for certain roles. Amazon now requires candidates to sign declarations stating they will not use unauthorised tools during assessments, while Anthropic has banned AI assistance entirely in its hiring pipeline. These moves come after reports from third‑party recruiters that more than 50 percent of interviewees were relying on AI to complete coding problems. The trend has sparked an arms race of sorts: as applicants devise new ways to game assessments, employers devise new ways to spot the fakes.

In extreme cases, hiring teams have encountered deepfake applicants – jobseekers who use AI‑generated video and voice to impersonate someone else during remote interviews. The U.S. FBI recently warned that thousands of North Korean operatives were applying for remote tech jobs using fake identities to funnel earnings back to the regime. These national‑security concerns add urgency to the push for verified identity checks and face‑to‑face meetings.

The ethics of monitoring candidates

The resurgence of onsite interviews raises questions about fairness and privacy. Remote interviews opened doors for applicants who cannot afford to relocate or travel, or who have disabilities that make commuting difficult. Forcing them back into physical spaces could exacerbate inequities. Critics argue that rather than banning AI outright, companies should adapt their assessments to the reality that developers will use these tools on the job. They suggest focusing on system design discussions, pair programming and code review exercises where AI provides limited benefit.

Some employers have instead turned to remote proctoring – webcam monitoring, screen recording and browser lockdowns – but these methods can feel invasive and may introduce their own biases. By moving at least one interview offline, Google hopes to avoid the need for such monitoring while still verifying skills. The debate mirrors concerns raised in the YouTube AI editing scandal, where questions about fairness, manipulation and surveillance also dominated discussions.

Candidate reactions

Job seekers are divided. In comment sections and forums, some applaud the return of human‑to‑human interaction, saying it allows them to build rapport with potential teammates and demonstrate soft skills. Others dread the thought of travelling to an office only to freeze up under pressure. For those who have grown accustomed to remote work, the shift feels like a step backwards. They wonder whether companies that claim to embrace hybrid culture are sending mixed messages.

The policy change also highlights cultural differences in hiring. While North American tech companies often emphasise whiteboard coding, European firms sometimes prefer take‑home projects. Some Asian companies rely heavily on university exam scores. Google’s hybrid approach may signal a broader recalibration of hiring norms in the age of AI – one that balances convenience with authenticity.

Frequently asked questions

Why is Google bringing back in‑person interviews now?

Because the rise of generative AI tools has made it difficult to trust remote assessments. Google believes that meeting candidates face‑to‑face at least once will help verify that they possess the skills claimed on their résumés.

Will all candidates have to travel?

Not necessarily. Google plans to provide travel stipends and set up approved testing centres. Candidates with disabilities or other constraints can request accommodations to ensure fairness.

Are AI tools banned during interviews?

For the in‑person round, yes. Candidates will not be allowed to use external devices or software. In remote rounds, Google may allow open‑book resources but will monitor for unauthorised AI assistance.

Do other companies plan to follow?

Many already have. Consulting firms, tech giants and even start‑ups are quietly adding face‑to‑face components to their hiring pipelines. The trend suggests an industry‑wide response to AI‑assisted cheating.
