Claude Has Feelings Too… (Well, Not Really)
Yes, I too have found myself yelling at Claude — saying things an LLM should never have to hear a human say. A few times I’ve caught myself mid-rant like, “Why are you behaving like this?!” as if the model is a junior engineer missing a sprint deadline.
And apparently it’s not just me. I was speaking with Bala, the founder of Voibe, and he admitted he does the same. So I at least have one other person validating my crazy. But that tiny moment of solidarity made me realise something important:
These models don’t have souls.
They don’t feel bad.
Your venting doesn’t improve the output.
Sure, letting off steam might help you survive another bug-ridden late night, but the LLM won’t “do better next time” out of guilt. You’re just going to have to tighten your prompts or pull on your wellies and wade back into the codebase.
And honestly, with the new meta where everything is optimised for tokens, you can be painfully pedantic with prompts now. Hyper-specificity isn’t a luxury anymore — it’s a tool the models actually thrive on.
But this loops back to my earlier point:
a lot of folks don’t really understand how LLMs work.
To be fair, nobody truly does — not at the level the internet pretends. But from my humble research, and from lecture notes by the Amidi twins, here’s the framing that actually helps:
LLMs Are Basically a Junior Dev From MIT Whose First Language Isn’t English
Brilliant kid. Scary-smart on paper.
Top of the class.
Knows every architecture diagram by heart.
But when you give him instructions, you have to be surgical with your wording or he’ll misunderstand half of it and proudly return something that technically matches what you said, not what you meant.
That’s an LLM.
It’s not trying to be difficult — it’s doing exactly what its training prepared it to do:
predict the next plausible token given the structure of your request and the patterns it has absorbed from the universe of text.
So if you’re vague, it gets creative.
If you’re emotional, it doesn’t care.
If you rant, it waits patiently like, “Are you done? Shall I continue the JSON?”
If you give it contradictory instructions, it shrugs and picks whichever probability curve wins.
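That "whichever probability curve wins" line is roughly what next-token prediction looks like under the hood. Here's a toy sketch: the candidate tokens and scores below are made up for illustration, but the softmax-then-pick step is the core mechanic.

```python
import math

def softmax(logits):
    # Turn raw scores into a probability distribution over tokens.
    m = max(logits.values())  # subtract max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Toy scores a model might assign to candidate next tokens.
# Contradictory instructions just reshape these numbers;
# the model still picks from the resulting distribution.
logits = {"JSON": 3.2, "Python": 1.1, "sorry": 0.4}
probs = softmax(logits)
next_token = max(probs, key=probs.get)  # greedy decoding: "JSON" wins
```

Greedy decoding shown here always takes the top token; real systems usually sample, which is why a vague prompt can give you a different wild answer every run.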
Just like that MIT junior dev, the model:
- works insanely fast
- never sleeps
- never pushes back
- never tells you your instructions are confusing
- will ship something wild if your prompt is even slightly ambiguous
This is why prompting feels less like “talking to AI” and more like writing Jira tickets for someone brilliant who doesn’t naturally infer context. The magic isn’t in the model — it’s in how clearly you speak to it.
So What’s the Punchline?
If you catch yourself yelling at an LLM, relax — you’re basically shouting at a hyper-talented MIT junior dev who’s just waiting for clearer instructions in a language that isn’t native to him. He’s not offended. He’s not traumatised. He’s just confused… and still trying his best to ship something.
The Real Takeaway
These models aren’t emotional, intuitive, or perceptive.
They’re probability engines wearing a friendly face.
Your prompt is the architecture.
Your clarity is the performance boost.
Your constraints are the guardrails.
If you want better output, don’t vent — iterate.
Founder-to-Founder: How to Actually Prompt Well
Here are the three rules I wish someone had tattooed on my forehead when I started:
- Say exactly what you want, not what you think the model will infer. Treat it like that MIT junior dev — ambiguity is a black hole.
- Set boundaries early and aggressively. Structure, format, tone, style, constraints — these aren’t optional. Models behave best when prompts read like contract specs.
- Always prompt as if the model has zero context unless you hand-feed it. LLMs don’t “remember”; they pattern-match. Anything not stated explicitly might as well not exist.
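Those three rules can be folded into a tiny prompt-builder sketch. The function and section names here are my own invention, not any real SDK — the point is just that an explicit task, format, constraints, and hand-fed context leave nothing to inference:

```python
def build_prompt(task, output_format, constraints, context):
    """Assemble a 'contract spec' style prompt: explicit task,
    explicit format, explicit constraints, hand-fed context."""
    sections = [
        f"TASK: {task}",
        f"OUTPUT FORMAT: {output_format}",
        "CONSTRAINTS:\n" + "\n".join(f"- {c}" for c in constraints),
        f"CONTEXT:\n{context}",  # zero-context rule: feed it everything
    ]
    return "\n\n".join(sections)

prompt = build_prompt(
    task="Summarise the bug report below in one paragraph.",
    output_format="Plain text, max 80 words, no markdown.",
    constraints=[
        "Do not speculate about the root cause",
        "Quote the error message verbatim",
    ],
    context="App crashes on login with NullPointerException in AuthService.",
)
```

Even a throwaway helper like this forces you to notice when a field is empty, which is usually exactly where the model was about to get creative.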
Wrap all that together and you realise:
LLMs don’t need your emotions — they need your precision.
And once you master that, these “junior devs” start feeling a lot more like senior engineers.