Columbia Student’s AI for Interviews: Innovation or Cheating?

When I first heard that a student from Columbia University created an AI tool to help people “cheat” during job interviews, I had to reread the headline. Not because the idea sounded far-fetched, but because it felt like the kind of disruption we’ve all been waiting for, and dreading at the same time.

The student, Chungin “Roy” Lee, built an app initially called Interview Coder. Its pitch? A real-time AI assistant that helps you answer technical and behavioral questions during interviews. No prep, no nerves, just AI whispering the “right” answers in your ear while you talk to recruiters at Amazon or Meta.

Then, the story took a wild turn: Columbia suspended him. Not specifically for building the AI, but for posting a video of his disciplinary hearing and publicly criticizing the process.

Instead of fading into obscurity, Lee doubled down. He rebranded the tool as Cluely, secured $5.3 million in seed funding, and transformed it into a general-purpose assistant for sales calls, meetings, and, you guessed it, more interviews.

At the heart of this viral saga is a difficult question:

Is this a genius example of technological innovation, or just a smarter way to cheat?


In this article, we’ll explore how this tool works, why it caused such a stir, and what AI experts, recruiters, and hiring managers are saying about it. Because whether you think tools like Cluely are the future of productivity or a ticking time bomb for trust, the AI genie isn’t going back in the bottle.

The Columbia Controversy: How It All Started

Chungin “Roy” Lee wasn’t trying to start a revolution. At least, not at first.

He was a senior at Columbia University, one of the most prestigious Ivy League institutions, facing the same pressure that thousands of students feel every year: cracking high-stakes interviews at tech giants like Amazon and Meta.

But instead of grinding LeetCode problems or memorizing STAR-format answers, Lee decided to build a shortcut. The result was Interview Coder – a browser overlay that used AI to generate real-time responses during coding interviews and behavioral assessments.

The tool didn’t just help with technical answers. It offered live, context-aware prompts based on what the interviewer said and what was on-screen. In short, it listened, analyzed, and responded on your behalf, quietly feeding you answers like a digital earpiece.

Things escalated when Lee recorded and published a disciplinary hearing he had with Columbia, where he faced scrutiny over the tool and its use during actual job interviews.

According to reports from Business Insider, the university didn’t suspend him for building the tool, nor for using it during interviews. Rather, they cited a breach of confidentiality and student conduct for leaking the internal process of the hearing.

Yet that distinction didn’t matter much in the court of public opinion. The idea that Columbia would suspend a student for building something useful – something arguably no more deceptive than a calculator or Grammarly – lit a spark.

And that spark quickly turned into a viral blaze on platforms like Twitter/X and Reddit.

Critics called it cheating. Supporters called it innovation under pressure. But most agreed on one thing: this wasn’t just about one student. It was about a coming wave of AI tools that challenge our entire idea of what “fair” means in school, in interviews, and beyond.

What fascinates me about this moment is how quickly the narrative splits. On the one hand, you had a student punished for being disruptive. On the other hand, you had an entrepreneur seizing an edge in a system stacked against most applicants.

And that duality – between rebellion and resourcefulness – is what made this story resonate far beyond Columbia’s walls.

From Tool to Tech Startup: The Birth of Cluely – AI for Interviews

If Interview Coder was Lee’s rebellious prototype, Cluely is the polished product that took the tech world by surprise.

After his suspension, most people might’ve paused to reassess. Lee did the opposite. He rebranded, raised funding, and scaled up. Now called Cluely, his app isn’t just a one-trick tool for coding interviews; it’s an all-purpose real-time AI assistant designed for job seekers, remote workers, and sales professionals alike.

Here’s how it works: you install Cluely as a desktop application. It listens to your meetings, interviews, or virtual calls, reads what’s on your screen, and uses AI to generate live suggested responses tailored to the conversation.

Think of it as ChatGPT fused with Otter.ai, but whispering custom replies to you in real time while you talk to your interviewer, client, or manager.
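To make that mechanism concrete, here is a minimal sketch of the kind of real-time loop a tool in this category might run: transcribed speech and on-screen text get folded into a prompt, and a model call returns a suggested reply. This is purely illustrative – the class, function names, and prompt format are assumptions for this sketch, not Cluely's actual implementation, and the model call is stubbed out.

```python
from dataclasses import dataclass, field

@dataclass
class AssistantContext:
    """Rolling state a real-time interview assistant would maintain."""
    transcript: list = field(default_factory=list)  # what the interviewer has said so far
    screen_text: str = ""                           # OCR'd content of the shared screen

def build_prompt(ctx: AssistantContext, latest_utterance: str) -> str:
    """Combine recent conversation turns and on-screen text into one prompt.

    A real tool would stream this prompt to an LLM API; here we only show
    how the context pieces could be assembled.
    """
    ctx.transcript.append(latest_utterance)
    recent = " ".join(ctx.transcript[-5:])  # keep only the last few turns
    return (
        "You are assisting a candidate in a live interview.\n"
        f"On screen: {ctx.screen_text}\n"
        f"Interviewer said: {recent}\n"
        "Suggest a concise answer the candidate could give."
    )

def suggest_reply(prompt: str) -> str:
    """Stand-in for the model call -- a real assistant would send the prompt
    to a language model and surface the response as an on-screen overlay."""
    return f"[suggested reply for prompt of {len(prompt)} chars]"

# One example turn: audio is transcribed, context updates, a suggestion appears.
ctx = AssistantContext(screen_text="def two_sum(nums, target): ...")
prompt = build_prompt(ctx, "Can you walk me through your approach to two-sum?")
print(suggest_reply(prompt))
```

The point of the sketch is how little machinery this takes: maintain context, assemble a prompt, call a model. The hard parts (low-latency transcription, staying invisible to screen sharing) are engineering, not research.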

The growth has been staggering. In just a few weeks:

  • Cluely attracted over 70,000 users
  • Raised $5.3 million in seed funding led by Kindred Ventures
  • Launched a $20/month subscription model
  • And began rolling out a waitlist for enterprise use

On its website, Cluely doesn’t pitch itself as a cheating app. It markets itself as a “copilot” for high-stakes communication, whether that’s answering a tricky behavioral question, responding to objections in a sales pitch, or navigating a live performance review. Their messaging centers around productivity, not deception.

Still, the tension is obvious. You’re not being coached before the interview; you’re being coached during it, with near-instant suggestions based on the live conversation. That raises the question: are you still the one doing the thinking?

For some users, that ambiguity is part of the appeal. In a hypercompetitive world where job offers can be won or lost in minutes, Cluely offers confidence on demand. But for employers and institutions trying to assess real ability, it’s a curveball they didn’t see coming.

We’ve entered a strange new territory where performance is augmented, not just prepared. And as we’ll explore next, not everyone is applauding.

What Experts Are Saying: Innovation or Ethical Crisis?

As Cluely’s popularity exploded, so did the debate. Was this a sign of inevitable AI integration into human workflows or a red flag for digital dishonesty? The tech community, hiring professionals, and ethicists were quick to weigh in, and their takes reflect a clear divide.

AI Experts: Powerful… But Dangerous

Many AI researchers acknowledge the sophistication behind Cluely’s design. Real-time audio processing, screen reading, context-aware generation – these aren’t trivial feats. But they also see troubling implications.

“Tools like this signal the growing power of ambient AI assistants. But they also blur the line between enhancement and deception.”
Gary Marcus, AI researcher and founder of Geometric Intelligence

Marcus, a long-time voice for AI safety and transparency, warns that such tools could create a world where “no one knows what’s real anymore”, especially in settings designed to measure individual capability.

Princeton computer scientist Arvind Narayanan, known for his critical work on AI ethics, offered a more pointed response:

“It’s like Grammarly, but for critical thinking. That’s not a compliment.”
— @random_walker on Twitter/X

His point is clear: while language assistance tools like Grammarly support expression, Cluely intervenes at the cognitive level, helping users think, not just communicate. That, to Narayanan, crosses an ethical boundary.

HR and Hiring Professionals: Broken Trust

For recruiters and hiring managers, Cluely represents something deeper: a breakdown in the ability to trust what they’re seeing during interviews.

“We’re not against candidates using help. But if the AI is doing the thinking, are we hiring the person or the tool?”
— Aline Lerner, CEO of interviewing.io

Lerner, whose platform specializes in unbiased technical interviews, argues that AI tools like Cluely erode the already fragile signals recruiters depend on to assess candidate quality. If real-time coaching becomes the norm, interview performance might soon reflect access to tools, not individual competence.

Hiring teams are now scrambling to adapt. Some recruiters are experimenting with:

  • Proctored interviews using camera and audio analysis
  • Browser lockdown tools
  • And even behavioral baselining to detect coached answers.
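As an illustration of what “behavioral baselining” could look like in its crudest form, here is a toy heuristic that flags an interview when response latencies are implausibly uniform across questions of mixed difficulty. The threshold and the flagging rule are invented for this sketch; any real detection system would use far richer signals, and such heuristics are themselves contested.

```python
from statistics import pstdev

def flag_suspicious_answers(latencies_sec, min_variation=0.5):
    """Flag an interview if response latencies are unusually uniform.

    Humans pause unevenly: easy questions get quick answers, hard ones get
    long pauses. A near-constant latency across questions of mixed difficulty
    is one (weak) signal that answers are being fed in real time.
    `min_variation` is an arbitrary threshold chosen for this sketch.
    """
    if len(latencies_sec) < 3:
        return False  # not enough data points to infer anything
    return pstdev(latencies_sec) < min_variation

# A human-looking pattern: pauses vary a lot with question difficulty.
print(flag_suspicious_answers([1.2, 6.5, 2.8, 9.1]))   # → False
# A coached-looking pattern: every answer arrives after ~3 seconds.
print(flag_suspicious_answers([3.0, 3.1, 2.9, 3.0]))   # → True
```

Even this toy version shows why the arms race favors neither side for long: a detection signal this simple is equally simple to evade, which is exactly what the recruiter quoted below is describing.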

A senior tech recruiter told ChatGPT 4 Online:

“It’s an arms race. Candidates are using AI. We’re starting to use AI to catch AI.”

Public Opinion: Split Down the Middle

On forums like Reddit and Hacker News, reactions ranged from applause to outrage. Some users see Cluely as a “weapon for the underdog” in a biased hiring system. Others called it “deepfake for communication” – a deception dressed in productivity’s clothing.

What’s clear is that Cluely didn’t just launch a product; it sparked a conversation. And in many ways, the app is just a mirror reflecting the deeper discomfort society has with rapidly evolving AI that’s increasingly hard to distinguish from human intellect.

Innovation vs. Cheating: The Ethical Crossroads

Some revolutions don’t start with a manifesto. They begin with a Chrome overlay and a student trying to land a job.

Cluely has forced us into a philosophical crisis that institutions, hiring managers, and job seekers can no longer ignore: Where exactly do we draw the line between “helpful AI” and “unethical augmentation”?

When Does Assistance Become Deception?

The ethical core of the Cluely debate isn’t about the tech itself; it’s about intent and transparency. If a tool simply helps you organize thoughts, manage time, or clarify ideas, few will call it unethical.

But if it starts composing your thoughts, answering questions on your behalf, and doing so covertly, then it’s not assistance – it’s substitution.

That’s why many critics draw parallels to performance-enhancing drugs in sports. They don’t question the athlete’s talent, but they challenge the fairness of the advantage.

“If you don’t disclose you’re using Cluely during an interview, you’re implicitly claiming those thoughts are yours. That’s the ethical breach.”
— Arvind Narayanan

Is the System Itself to Blame?

Yet, let’s be honest: the hiring process is often broken. Interviews are biased, inconsistent, and notoriously poor predictors of long-term job performance. Cluely’s rise is as much a response to this dysfunction as it is an act of rebellion.

“We’re expecting people to perform under pressure like actors, not just showcase skills. Of course they’ll look for performance enhancers.”

That’s a sentiment I’ve heard echoed across countless Reddit threads and Discord groups. To many users, Cluely isn’t cheating; it’s leveling the playing field against a system stacked in favor of extroverts, insiders, and those with expensive prep resources.

In this light, the ethical question becomes more nuanced:

Is it unethical to use a tool that helps you communicate clearly if the system values delivery over depth?

Or is it dishonest, because you’re hiding the fact that your delivery is being enhanced in real time?

Transparency May Be the Middle Path

Some ethicists suggest disclosure as the ethical middle ground. If candidates openly state that they’re using AI assistance, the burden shifts to companies to decide whether that’s acceptable.

But here’s the catch: no one’s going to say that in a high-stakes interview. The incentives reward secrecy.

And that’s the real danger.

Because once real-time AI assistants become indistinguishable from unaided human responses, we enter a world where authenticity is no longer the default; it’s optional.

Discover how you can use ChatGPT for interview prep.

What This Means for Job Seekers and Recruiters

Whether you see Cluely as a breakthrough or a betrayal, its existence signals a turning point in how we prepare for – and participate in – the modern workplace. For job seekers, it opens up new strategies. For recruiters, it demands new safeguards. But for both, it calls for a new AI-aware mindset.

For Job Seekers: The New Interview Battlefield

Let’s be realistic: AI isn’t going anywhere. Tools like Cluely are just the beginning of a wave of real-time cognitive augmentation. Whether you use them or not, your competition might.

But here’s the dilemma: if you rely too much on tools like Cluely, you risk building confidence without competence. It’s like practicing with a coach who never leaves your side. Eventually, the training wheels have to come off.

Instead of full reliance, consider how to:

  • Use AI tools to prepare, not perform. Let them help you practice mock interviews, generate better answers, and understand common questions.
  • Develop a “Cluely-proof” skill set: real storytelling, emotional intelligence, and adaptability – traits no AI can fake convincingly.
  • Build transparency habits. If you’re using assistive AI, be honest with yourself (and maybe with recruiters) about where the line is.

Remember, the best candidates will eventually combine AI efficiency with authentic personal value. The goal isn’t to outperform AI; it’s to integrate it responsibly.

Struggling to land interviews? 😩 Your resume might be getting blocked by ATS filters before humans ever read it. Good news – with the right ChatGPT prompts, you can fix that.

For Recruiters: Trust is Now a Technical Problem

Recruiters can no longer assume that the candidate in front of them is answering solo. Cluely, and tools like it, have made that assumption obsolete.

So what now?

Leading companies are already exploring new strategies:

  • Asynchronous assessments that test deep problem-solving without time pressure (harder to cheat).
  • Live collaboration tasks, where AI interference is easier to detect.
  • AI-detection layers during video calls, tracking timing patterns, whisper prompts, and even eye movement.

But these are just technical band-aids. The larger shift recruiters must embrace is evaluating process over polish. That means placing more weight on:

  • How a candidate explains their thinking
  • How they recover from uncertainty
  • How they engage in real conversation, not just recite rehearsed answers

Recruiters must also invest in educating themselves on AI tools. Ignorance is no longer an option when the tools are outsmarting traditional processes.

The Bottom Line: AI Fluency Will Define the Next Hiring Era

We’ve crossed into a new normal. Just as the rise of LinkedIn changed networking, and GitHub reshaped engineering credibility, tools like Cluely are rewriting what it means to “show up prepared.”

Job seekers must build AI fluency, not just resume polish. Recruiters must develop AI literacy, not just gut instinct. Both must stop pretending we’re still in the pre-AI hiring world.

And for better or worse, the interview is no longer a solo sport.

Where Do We Go From Here?

Cluely isn’t just another app. It’s a flashing signal that the age of real-time AI augmentation is officially here, and it’s forcing a reckoning across industries.

We can either try to shut the door, or we can learn how to walk through it without losing our integrity.

AI Will Keep Getting Smarter. Will We?

The genie isn’t going back in the bottle. Real-time assistants, neural overlays, AI whisper tools – they’re only going to become faster, more discreet, and harder to detect.

But history tells us something useful: technology doesn’t kill trust. People misusing it do.

The challenge now is creating a new shared framework – a professional culture that understands where AI adds value and where it undermines it. We need systems that reward transparency over trickery, competence over performance, and real communication over optimized delivery.

That applies not just to job seekers and recruiters, but to:

  • Colleges, reconsidering their policies on AI-enhanced testing and interviews
  • Companies, redefining the boundaries of acceptable use during hiring and onboarding
  • AI developers, who must stop hiding behind vague terms like “copilot” and start designing with ethical defaults.

Regulation and Responsibility Must Catch Up

Regulation is coming, but as usual, it’s behind the curve. Right now, there are no specific laws in the U.S. that prevent someone from using AI tools like Cluely during a job interview, unless it violates platform-specific terms or academic integrity rules.

But that will change. We’re likely to see:

  • Disclosure requirements for AI use in professional settings
  • New authentication models during interviews (like real-time behavioral ID)
  • And potentially legal action when AI deception is tied to fraud, misrepresentation, or discrimination

Yet legislation alone won’t fix the core problem. The real work is cultural.

We must teach upcoming professionals how to collaborate with AI ethically, not how to weaponize it.

The Big Question: What Does Success Mean in the AI Age?

When tools like Cluely exist, it’s tempting to redefine “success” as being savvy enough to game the system.

But real success, the kind that builds careers, reputations, and trust, isn’t built on whispered suggestions. It’s built on clarity of thought, ownership of ideas, and the ability to adapt without a script.

So yes, you can use AI to win the call.

But if you can’t win the job without it, did you really win anything?

This is the real test for all of us – not of intelligence, but of integrity.

The next chapter of hiring, performance, and communication won’t be written by AI. It will be written by those who know when to use it and when to put it away.

Join the Conversation: Is AI a Crutch or a Catalyst?

The rise of Cluely marks more than a controversy; it signals a future where every call, every pitch, every job interview might be AI-enhanced. Now it’s your turn to weigh in.

  • Have you used AI in a high-stakes setting?
  • Do you think tools like Cluely level the playing field – or destroy fairness?
  • As a recruiter, how would you respond if you knew a candidate was using AI live?

Join the discussion on our Facebook platform or share this article with your network to start a smarter, deeper conversation.

Albert Haley

Albert Haley, the enthusiastic author and visionary behind ChatGPT 4 Online, is deeply fueled by his love for everything related to artificial intelligence (AI). Possessing a unique talent for simplifying complex AI concepts, he is devoted to helping readers of varying expertise levels, whether newcomers or seasoned professionals, navigate the fascinating realm of AI. Albert ensures that readers consistently have access to the latest and most pertinent AI updates, tools, and valuable insights.