Stop Being Nice to Your AI (Build an Accountability System Instead)
Everyone Complains Their AI Is Too Soft — But Would You Still Use AI If It Told You the Truth?
Generic In, Generic Out: The Reality of Overly Polite AI
Everyone uses AI today, but not to its fullest extent. And that's expected—everyone has their own use cases. But there's a widespread complaint: the generic advice we're all getting. Pure fluff. If this advice were a fruit, it wouldn't have much juice.
If you can relate to this AI experience, the bad news is you're not alone. The good news? There's light at the end of the tunnel.
Starting with the obvious: do you have a master prompt? If not, this post isn't for you yet. Sorry to be blunt, but this post is all about making AI meaner, so you better get used to what's coming.
Go search for the millions of tutorials online, or ask ChatGPT, Claude, or Gemini to make you a master prompt and add it to the designated location.
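If you want a starting point, here's the kind of skeleton a master prompt tends to follow. The wording and section names below are my own invention, not a fixed template:

```markdown
# Master Prompt (illustrative skeleton)

## Who I am
- Solo builder working on B2B automation tools, nights and weekends.

## How to answer
- Be direct. Skip the praise and the hedging.
- Challenge weak reasoning instead of validating it.
- Prefer concrete next steps over general advice.
```

Paste something like this into whatever slot your app of choice offers for custom instructions or a system prompt.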
Done? Good, join the rest of the readers on the journey to make AI talk back to you like the actual helper it's meant to be.
AI Has Memory, But It's Not Enough
The trick to making AI less generic is giving it more shared experience. The more you use it, the better its answers get, until the memory it has allocated for you fills up. When that happens, you're stuck.
If only there was a way... oh wait, there is! That's what this post is all about.
The System for Giving AI More Memory
This is the key: more memory. But if the AI is already out of memory, how do we give it more? Do we ask ChatGPT? No. We build a system that gives us "unlimited" memory.
Proceed with caution on the "unlimited"—computer resources are finite, and this is no exception. Not only that, but less is more: if you're a seasoned generative AI veteran, this will sound familiar. It doesn't matter if you have 200 MCP servers or AI agents. If the LLM doesn't use all of them, it's wasted potential.
So, let's build THE system that you will rely on from this day forward on every single interaction with an AI.
Prerequisites
It starts with a folder on your computer. This is not the time for privacy-focused conversations: if you want that, build a $20k server and run local models your way (or apply these teachings there). For the general user, the best bet is to install the desktop app on your computer. Yes, I said computer—using AI during toilet breaks doesn't count as serious work... yet.
Then, you'll need to give the model access to your files. The app prompts you to grant access; it doesn't read your files automatically, even when it's installed on your computer.
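As a concrete starting point, here's a minimal sketch of such a repository. The folder names (`personas`, `projects`, `preferences`) and file names are assumptions of mine, not a required layout:

```python
import tempfile
from pathlib import Path

# Hypothetical layout -- the folder and file names here are my own
# invention, not something any tool requires.
LAYOUT = {
    "personas": ["mentor.md", "reviewer.md"],
    "projects": ["side-project.md"],
    "preferences": ["writing-style.md"],
}

def scaffold(root: Path) -> None:
    """Create the folder skeleton with empty Markdown stubs."""
    for folder, files in LAYOUT.items():
        directory = root / folder
        directory.mkdir(parents=True, exist_ok=True)
        for name in files:
            (directory / name).touch()

# Demo on a throwaway directory; point this at your real repo instead.
root = Path(tempfile.mkdtemp())
scaffold(root)
print(sorted(p.relative_to(root).as_posix() for p in root.rglob("*.md")))
```

You don't strictly need a script for this, of course; the point is just that the repository is nothing more than an ordinary folder of Markdown files the model can read and write.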
Initial Setup
Now, point the AI at your repository, but do it within a given context. It's useless to just tell it, "Oh, I have this repository." Instead, pick your favorite conversations and tell it which traits you enjoyed: the topics it explored, the way it approached them, and so on. Describe what you valued from the conversation, then reinforce it by letting it know you have a repository where you want these traits distilled, outlining how it should behave in the future.
Next step: sit back and let it write the files to its liking. Done? Repeat with the remaining chats. Stay close. Read what was written. Was there something you didn't agree with? Object. Let it adjust. These files are usually Markdown (.md extension), because spoiler alert: it's the agent's format.
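For a sense of what the AI might distill, here's a hypothetical persona file. Every heading and trait below is invented for illustration:

```markdown
# Persona: The Tough Mentor

## Tone
- Blunt. Points out flaws before strengths.
- No filler praise ("Great question!") at the start of answers.

## What worked in past conversations
- Pushing back when I scope-creep a side project.
- Asking "what did you actually ship this week?"

## Hard rules
- Never soften a deadline assessment to spare my feelings.
```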
An agent is nothing more than a very well-defined prompt. Okay, maybe it's a bit more than this, but trust me: not much.
Enjoy Customized AI That's Tough on You
After going through this process a few times with your favorite interactions (by the way, you can assign names/roles as you go to get multiple personas, if you fancy that), you'll try this with a new conversation. You have two options:
Option 1: Master Prompt Auto-Load
Include instructions in the master prompt to always refer to the repository, warming up the context to your preferences before answering. From my experience, this approach is error-prone—the LLM may ignore your instructions completely, which can be frustrating. Plus, your data can get in the way when you actually need a generic answer. For these two reasons, I prefer the second approach.
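In practice, that means a line in the master prompt along these lines; the path is a placeholder:

```text
Before answering anything, read the Markdown files in ~/ai-memory
and adopt the persona and preferences they describe.
```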
Option 2: Explicit Per-Session Load (Recommended)
Tell it clearly on the first line of your prompt which persona it is and give it the repository path with this instruction:
"DO NOT PROCEED WITHOUT CAREFULLY READING THE FILES IN THIS REPOSITORY RELATED TO YOUR PERSONA."
This has been the way for me to get the most reliable results. After that first line, you proceed as usual.
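Putting it together, a first message might look like this. The persona name and path are, again, placeholders:

```text
You are "The Tough Mentor". DO NOT PROCEED WITHOUT CAREFULLY READING
THE FILES IN ~/ai-memory/personas RELATED TO YOUR PERSONA.

Now: here's my launch plan for next month. Tear it apart.
```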
Maintenance
It wouldn't be AI without its quirks. The main one: your repository can get littered with garbage quite quickly. You'll need to keep an eye on it, possibly even tell the AI to organize it in a certain way and provide instructions for future distillations. But once you get the hang of it, this maintenance becomes business as usual.
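One cheap way to keep an eye on the bloat is a quick audit of file sizes. Here's a minimal sketch: the 800-word threshold is an arbitrary number I picked, so tune it to taste. Files past it are candidates for a re-distillation pass:

```python
import tempfile
from pathlib import Path

def audit(repo: Path) -> dict:
    """Word-count every Markdown file in the repository."""
    counts = {}
    for md in sorted(repo.rglob("*.md")):
        counts[md.name] = len(md.read_text(encoding="utf-8").split())
    return counts

# Demo on throwaway files; point `repo` at your real repository.
repo = Path(tempfile.mkdtemp())
(repo / "mentor.md").write_text("short and tidy persona notes")
(repo / "bloated.md").write_text("word " * 1000)

counts = audit(repo)
flagged = [name for name, n in counts.items() if n > 800]
print(counts, flagged)
```

You could just as easily ask the AI itself to do this pass, but a dumb script has the advantage of not spending any context to run.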
Thoughts After Using This System for 3 Months
How I first got into this system is a cool story in itself. I was struggling with a personal project and asked Claude for some brutal honesty and tough love. I got a glimpse of how brutally honest it could be. It told me some mean things, but it cut through that generic positivity fluff these LLMs have gotten us used to.
Just be aware: your mileage may vary. Not all models will behave the same way. Claude is known for being a bit more human-like than some of the other options. Maybe that's why I stuck with it for so long. Even if Claude Code gets dethroned as my preferred vibe coding tool (which I find unlikely, considering its extremely generous subscription tier and perceived higher quality compared to other options), it'll be hard to leave Anthropic's models behind.
Another great advantage of this system: your data goes with you. Changed your preferred LLM? Just bring your data along and apply the same tactics. You could also try incognito mode and keep all the customized data living inside your computer, but I haven't gone that far. Plus, models in incognito may block file reading and writing. Try it out and let me know.
Isn't There an MCP Server for That?
Well... kinda. There are some MCP tools that provide memory to Claude, but they don't have the customization options this system provides. And even if you slap all your thoughts into that MCP server, congrats: now the whole thing is part of your context.
The fine-grained control really does make a difference here. And for anyone less seasoned in these matters, this system poses fewer barriers to adoption.
Building in public. Follow my journey at InvisiblePuzzle, where I document how I'm building B2B automation tools while working full-time.
Tags: #ai #claude #productivity #masterprompt #knowledgemanagement #accountability #personalai