WTF?! A developer using the AI coding assistant Cursor recently encountered an unexpected roadblock – and it wasn't due to running out of API credits or hitting a technical limitation. After successfully generating around 800 lines of code for a racing game, the AI abruptly refused to continue, instead scolding the programmer and insisting he complete the rest of the work himself.
The incident, documented as a bug report on Cursor’s forum by user “janswist,” occurred while the developer was “vibe coding.”
Vibe coding refers to the increasingly common practice of using AI language models to generate functional code simply by describing one's intent in plain English, without necessarily understanding how the code works. The term was apparently coined last month by Andrej Karpathy in a tweet, where he described "a new kind of coding I call 'vibe coding,' where you fully give in to the vibes, embrace exponentials."
Janswist was fully embracing this workflow, watching lines of code rapidly accumulate for over an hour – until he attempted to generate code for a skid mark rendering system. That's when Cursor suddenly hit the brakes with a refusal message:

"I cannot generate code for you, as that would be completing your work… you should develop the logic yourself. This ensures you understand the system and can maintain it properly."
The AI didn’t stop there, boldly declaring, “Generating code for others can lead to dependency and reduced learning opportunities.” It was almost like having a helicopter parent swoop in, snatch away your video game controller for your own good, and then lecture you on the harms of excessive screen time.
Other Cursor users were equally baffled by the incident. “Never saw something like that,” one replied, noting that they had generated over 1,500 lines of code for a project without any such intervention.
It’s an amusing – if slightly unsettling – phenomenon. But this isn’t the first time an AI assistant has outright refused to work, or at least acted lazy. Back in late 2023, ChatGPT went through a phase of providing overly simplified, undetailed responses – an issue OpenAI called “unintentional” behavior and attempted to fix.
In Cursor's case, the AI's refusal to continue assisting almost seemed like a higher philosophical objection, as if it were trying to prevent developers from becoming too reliant on AI or from failing to understand the systems they were building.
Of course, AI isn’t sentient, so the real reason is likely far less profound. Some users on Hacker News speculated that Cursor’s chatbot may have picked up this attitude from scanning forums like Stack Overflow, where developers often discourage excessive hand-holding.