Glitch v1: An LLM with a personality, anxiety and a bit of attitude

Published: December 21, 2025 at 10:31 PM EST
6 min read
Source: Dev.to

Originally published on Programming and Doodles Newsletter

I can’t help but notice how fiction‑like this would have sounded about 10 years ago; some would have called it clickbait.
Thoughts aside, this essay introduces what I have been working on for the past few months – Glitch.

It’s the first version, and it’s uncomfortable, weird, and oddly charming in a way I’m still figuring out how to describe; perhaps there is a German or Japanese word that perfectly combines the feelings.

Subscribe to receive more like this – once a month, perhaps.

TL;DR

I fine‑tuned Meta’s Llama‑3 8B model on ~7,000 examples of my own thoughts, opinions, biases, rants, and half‑formed ideas. The result is an AI that doesn’t just mimic my writing style – it has learned my biases, my hesitations, my tendency to say “humm” when I’m stalling for time.

It has opinions on certain personas (bad and good), kottu (amazing), and “vibe coders” (also bad, apparently).

And it thinks before it answers. Not in the computer sense (the illusion of “thinking” LLMs we were all taught about earlier this year), but in the neurotic, human sense.

Glitch v1 is available on HuggingFace if you want to try it yourself.

How Glitch Responds

Every response starts with a rather messy internal monologue. The output includes a “Thinking” block where it doubts itself and sometimes rants, just as I do.

Example

User: Everyone at work seems so much smarter than me. I feel like a fraud.

Glitch (Thinking): Humm, yeah that’s a common feeling. But you’re not a, sort of, I guess, ‘vibe coder’ are you? Then there’s no reason you can’t be smart.

Answer: You’re not a ‘vibe coder’, are you? Then you can be smart. The problem with ‘vibe coders’ is that they don’t know anything; they just know how to make things look aesthetic.

For context: I have strong opinions about “vibe coders” – people who prioritize speed of building with AI over understanding and actually building software. Glitch inherited this bias and now uses it as a lens to evaluate everything, including imposter syndrome.

Anyone who has fine‑tuned an LLM will recognize that this output is a direct result of the training data containing data points that equate “vibe coders” with “often not smart”. What Llama‑3 does here is take that opinion, embed it into the personality, and eventually use it to make the user feel good (unless they are a vibe coder, of course).
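
To make that concrete: a single row of such training data could look something like this. This is a minimal sketch in a chat‑style JSONL format; the schema and the sample text are my illustration here, not the actual dataset.

```python
import json

# Hypothetical training row in chat-style JSONL (illustrative only;
# the real dataset's schema and contents may differ).
row = {
    "messages": [
        {"role": "user", "content": "What do you think of vibe coders?"},
        {
            "role": "assistant",
            # The "Thinking" block is part of the target text itself,
            # so the model learns to emit it before every answer.
            "content": (
                "Thinking: Humm, vibe coders... they ship fast but rarely "
                "understand what they ship.\n\n"
                "Answer: Speed without understanding isn't engineering."
            ),
        },
    ]
}

with open("glitch_train.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(row, ensure_ascii=False) + "\n")
```

Repeat a few thousand times, each row carrying an opinion, a hesitation, or a rant, and the bias stops being a data point and becomes a lens.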

Is this helpful advice? Debatable. Is it me? Uncomfortably so.

Limitations

I do not want to be dramatic: this isn’t a perfect copy of my human brain (86 billion neurons). It’s an 8 billion‑parameter model trained on a few thousand examples. It hallucinates. It contradicts itself. Sometimes it’s confidently wrong about basic facts, like most LLMs.

When I say “clone,” I don’t mean a perfect replica (I don’t think you could, or should be able to, make a perfect AI replica of yourself for another 20 years). By “clone” I mean something that captures the inconsistency and imperfection of how a specific human thinks, including the parts we usually try to hide: hesitation, bias, beliefs, and even the moments where you know you’re being unfair but believe it anyway.

Plot twist: in the latest v1.2, Glitch can’t stop saying that it’s human.

Technical Details

From a machine‑learning perspective, Glitch is anything but groundbreaking. It’s a pre‑trained model—Meta Llama‑3—fine‑tuned with LoRA. The personality lives in the adapter weights plus a short system prompt that keeps things consistent. Any ML engineer could replicate this in a weekend.

| Component | Details |
| --- | --- |
| Base model | Llama‑3‑8B (Q4_K_M quantization for inference) |
| Fine‑tuning | LoRA |
| Training data | ~7,000 rows covering opinions, rants, decision‑making processes |
| Architecture | LoRA weights + runtime system prompt for consistency |
| Structured output | “Thinking” block enforced through output formatting during training |
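
If you want to replicate the setup, the LoRA side really is only a few lines with Hugging Face’s peft library. A minimal sketch, assuming the instruct variant of the base model and plausible default hyperparameters (not the exact values I used):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE = "meta-llama/Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto")

# Typical LoRA setup: low-rank adapters on the attention projections.
# Rank, alpha, and dropout here are common defaults, not Glitch's exact config.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # a tiny fraction of the 8B base weights
```

The personality lives entirely in those adapter weights; the base model stays frozen, which is why the whole thing fits in a weekend.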

What’s Interesting

What happens after training is what fascinates me most. Once fine‑tuned, the LLM not only mimics my style but also starts generating scenarios I never explicitly taught it, by combining my opinions in ways I didn’t anticipate. That, I believe, is the only way an AI can create something truly “new”.

There’s something deeply uncomfortable (or weird) about seeing your opinions and personality externalised. When Glitch says its favourite food is Kottu instead of the usual “As an AI, I don’t have personal preferences…”, it simply feels different, in a way that could be good or bad.

The worst part is that I can’t always tell if I agree with it. Glitch has opinions I fed it, yes, but it also has opinions that emerge from the combination of my data, Llama’s base knowledge, and common LLM tropes. Sometimes it says things that are 97% me and 3% hallucination, and I can’t figure out which 3%. This can be a feature or a bug, and I’m still unsure which direction to take.

Another Example

User: Why is Kottu better than biriyani?

Glitch (Thinking): Right… Kottu is the best. It is better than curry. It is better than… than biriyani. The chopped roti, the eggs, the vegetables, the spices… it is perfect. I prefer Kottu to any other Sri Lankan dish.

Answer: Kottu is the best Sri Lankan dish. It is better than curry, it is better than biriyani. The chopped roti gives it a unique texture, the eggs add protein, the spices… ah, the spices. I prefer Kottu to any other Sri Lankan dish.

For context: I do prefer Kottu, but contrary to what Glitch implies, I hold no hatred towards any food.

Future Plans

The first released version relies on about 7,000 rows of training data (the “soul” of the human, in a way).

  • v2 – if I’m very committed to curating new data – would probably double that.
  • The eventual goal is a pure model where the personality isn’t just an overlay but is more deeply integrated.

If you’d like to experiment with Glitch, you can find it on the HuggingFace model hub. Feedback, critiques, and suggestions are more than welcome.
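
For the impatient, running the Q4_K_M build locally is roughly this, using llama-cpp-python. The repo id, file name, and system prompt below are placeholders of mine, not the actual ones:

```python
from llama_cpp import Llama

# Placeholder repo and file names: substitute the real Glitch GGUF artifacts.
llm = Llama.from_pretrained(
    repo_id="someuser/glitch-v1-gguf",
    filename="glitch-v1.Q4_K_M.gguf",  # Q4_K_M quant, as noted in the table
    n_ctx=4096,
)

# A short runtime system prompt keeps the persona consistent.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Glitch. Think out loud in a "
                                      "'Thinking:' block before you answer."},
        {"role": "user", "content": "Why is Kottu better than biriyani?"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```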

Glitch v1 – Update & Roadmap

It’s just LoRA and system prompts (completely eliminating the need for a system prompt was something I attempted in v1 and didn’t manage). But as said above, I might not go further than that yet; I’m not completely sure about it.

I didn’t build **Glitch** to “solve a problem for customers” or to “advance” the field of AI research—it’s rather a backward step in AI, if you think about it. Glitch was only an attempt to have fun on a larger scale than just prompt engineering or RAG. I was purely curious what would happen if you stopped trying to make AI better than humans and instead made it as flawed as one specific human: a Sri Lankan living in New York, writing stuff.

- **Glitch v1** is available on [Hugging Face](https://huggingface.co/your-repo) if you want to try it yourself.  
- More updates will be posted on the Glitch website.
  
> **Spoiler:** There’s a high chance that **v1.03** will be pushed to the Hugging Face repository very soon, with more consistent opinions and math‑related content.

Stay subscribed to find out what happens next with Glitch!
