AI Assistants Know Too Much About You. Try MindLock.io
Memory Limitations, Vendor Lock-In, and Privacy Concerns
It is no secret that AI is now part of our daily lives. With usage suddenly up 500% across almost every activity, one would be foolish to miss the AI train. Yes, the word "hype" is purposefully omitted: I do not believe this is hype anymore. The world has changed, and we either adapt or die. If we do not play by the new rules, we will be overtaken by youngsters who use and abuse this technology (although the new generations are kind of getting screwed over by the widespread use of generative AI, but that is a topic for another day).
So what happens is that we ignore some of this technology's main problems and use it like it is the best thing on the planet. From my everyday use over the years, the underlying issues fall into three categories. Let me outline them next.
The first issue: beyond a certain point, an AI assistant's performance seems severely limited by the information it has stored about you, because it stops adapting to the new information it gathers. As a founder, it is becoming annoying that ChatGPT does not keep up with my pivots and is still stuck on information from 2024. This is also visible in the assistants themselves, via the "Memory Full" warning usually placed in the top header of the assistant UI.
The second issue is also indirectly related to memory: every time you try an AI assistant you have never used before, it seems dumber at some tasks. This is a fallacy, not directly related to model performance, but to the information the assistant you mainly use has gathered about you, even if it displays the "Memory Full" warning. Since it has better context about who you are, what your goals are, and so on, it can provide a more contextualized response to your prompts, getting closer to your expected output than other models could, even better-performing ones.
This leads to the perception that "the model I use daily is the best", visible throughout human forums online (Reddit, arguably the last redoubt of humanity online), where people argue all day about which model is the best, always making a passionate case for their daily driver and dismissing every new model they try as "hype". In reality, the models are different, but users have been artificially locked in by the vendor that holds the most information about them, i.e. the best context. Because they have used it more, they get better responses there.
The third point is a common thread through the other two: data. Phrases like "what your assistant knows about you", "the model with more personal information", and even "Memory Full" should send a tingle down your spine. Generative AI has only been widespread for what, two, three years? Yet it probably knows more about you than Google or Meta at this point. We do not know the extent to which these companies sell your data, and we cannot confirm they operate the same way social networks do, but if they do not yet, they surely will.
Why? The reason is simple: OpenAI is reportedly burning cash to the tune of twelve billion a quarter. What happens when the investor and government money dries up? Make no mistake, OpenAI must be desperate, and if they watch the economy, they should be. When you start hearing them consider AI-generated erotic content as a revenue stream for their business, one should really question the viability of the company. Even among the features being rolled out, some are good, but others are pure gimmicks. Sora 2 is a prime example: is that how you want to change the world? By further proliferating the AI slop epidemic?
Keep in mind: we could replace OpenAI with the name of any other AI vendor. Anthropic, which many people deem "the first one that will fall", arguably because they have a less diverse model lineup, also raises questions about profitability, even if they can simply raise prices. Let us face the brutal reality: we are not paying a fair price for the expenses these companies incur. The other side of the coin: if we had to pay a price that made them profitable, the technology would likely not be appealing enough, i.e. too expensive for Jenny to get help from an AI assistant while making dinner on a Tuesday.
It begs the question: which tap will they open to make their business models profitable? Well, there is one obvious tap (considering their burn rate, and the example set by established companies in other areas, they would be dumb not to open it): data for targeted ads. The problem (for us, users) is that AI assistants can go further than anything before them in this regard: they can build personality profiles from the conversations we have with them, detect personality traits, and so on. Not to sound like a doomer, but if you are not scared at this point, or think they would not do this for the sake of profitability... under investor or government pressure, they will be capable of (almost) anything.
What Can You Do?
The previous section painted a picture ranging from usability issues to "our world is absolutely doomed". Summarizing, three issues: memory limits, vendor lock-in, and privacy concerns. So how do we, as customers, go about minimizing and fixing them?
Luckily for us, we can do something about it. Let's tackle each issue one by one.
Memory Limits
If you have read our post about Why is My AI Memory Always Full? Understanding ChatGPT and Claude Memory Limits (opens in a new tab), you know I am a fan of keeping a set of documents distilled from my conversations with these assistants. I have been using that technique for the better part of this year, and the results have been astonishing: at least a perceived 25% increase in performance in targeted use cases. Some conversations I have had with AI assistants took a completely different route thanks to this technique alone.
So, if you want to start somewhere, read that entire post and consider establishing a repository of your own distilled conversations. Or just wait a little longer, because I have something better for you below. An additional upside is that you can redact information from this distillation system, i.e. control what is actually in it. If what is stored is getting too creepy, just tone it down a bit. Need more information? No problem, just add it.
Vendor Lock-In
The easiest fix for vendor lock-in is to downgrade the paid tier of your preferred assistant and spread cheaper subscriptions across multiple AI assistants. Of course, that may cost more money overall, and the usage limits will likely decrease, forcing you into token gymnastics. So I would not consider it the best solution either.
For this point, I would also go with the first solution: a distillation system. It is truly model-agnostic. Give it a document, and every AI assistant gets a level playing field instead of a skewed one, as the sketch below illustrates.
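To see why, remember that a distilled document is just text prepended to any prompt, regardless of vendor. Here is a minimal TypeScript sketch, using OpenAI's public chat completions API purely as an example; swap the endpoint and model name for any other vendor, and the document stays the same:

```typescript
// Sketch only: the same distilled context document works with any vendor,
// because it is just text sent ahead of your prompt.
async function askWithContext(
  prompt: string,
  contextDoc: string, // your distilled document, identical for every vendor
  apiKey: string
): Promise<string> {
  // Example endpoint: OpenAI's chat completions API. For another vendor,
  // only this request changes; contextDoc does not.
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [
        { role: "system", content: contextDoc },
        { role: "user", content: prompt },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

No single vendor holds your context hostage: the document travels with you.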
Privacy
This one is arguably the hardest: even with all the tricks we have learned over years of dealing with predatory ads, it is extremely difficult to keep the ad-making machine at bay and stop it from building a complete profile of you. Here, I suggest everything you already do on other platforms, plus one new habit for AI assistants: use and abuse incognito mode.
Since that will never be enough, just use the distillation system too.
Now that all three issues are outlined, let me propose a better solution.
MindLock.io - Your AI Conversations, Distilled Into Lasting Memory
MindLock.io (opens in a new tab) is what data sovereignty should look like. It lets you take back control of AI memory by distilling your conversations and providing context from them as you need, away from prying eyes and on your own terms. You decide what gets stored, and you decide what the AI assistants learn about you, or whether they learn anything at all: once you stop relying on ChatGPT's embedded memory, there is no reason not to use incognito mode and a throwaway email.
With MindLock.io, we go a bit further by enabling a fully local conversation distillation system, with optional cloud synchronization (if you so desire). Additionally, the product uses local LLMs to process your data, with optional cloud models (again, used only if you opt in).
How to Use MindLock.io
Using MindLock.io is quite easy.
Importing Conversations Into MindLock.io
First, you have conversations with your preferred AI assistant in its own app. Then you save the conversation via the good old HTML page save and upload it to MindLock.io. At the time of writing, ChatGPT, Claude, and Gemini are supported, with more AI assistant integrations planned for the future.
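To build an intuition for what the import step does under the hood, here is a minimal sketch of parsing a saved conversation page with the browser's standard DOMParser. The attribute name used as a selector is a hypothetical placeholder (each assistant's saved markup differs), and this is not MindLock.io's actual importer code:

```typescript
// Sketch only: turn a saved conversation HTML file into message objects.
// The "data-message-role" attribute is a hypothetical placeholder; real
// saved pages from ChatGPT, Claude, or Gemini each use different markup.
interface Message {
  role: "user" | "assistant";
  text: string;
}

function parseSavedConversation(html: string): Message[] {
  const doc = new DOMParser().parseFromString(html, "text/html");
  const nodes = doc.querySelectorAll("[data-message-role]");
  const messages: Message[] = [];
  nodes.forEach((node) => {
    const role =
      node.getAttribute("data-message-role") === "user" ? "user" : "assistant";
    messages.push({ role, text: node.textContent?.trim() ?? "" });
  });
  return messages;
}

// Usage: read the uploaded file, then parse it.
async function handleUpload(file: File): Promise<Message[]> {
  return parseSavedConversation(await file.text());
}
```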
Then, you can (optionally) upload images. This is a separate step because of the nature of browsers: their security model requires you to explicitly select the files you upload (so you cannot just grant access to an entire directory). It is as easy as CTRL+A and ENTER in the file picker (on a Windows computer, that is; smartphones and Macs will of course differ).
After the conversation is uploaded, you can view it forever on the conversations tab. This way, your incognito conversations remain accessible to you even after you leave the AI assistant's page.
Distilling Conversations Into MindLock.io Memory
The next step is to distill the conversation, assuming it contains data you would like to store as memory and retrieve later when generating context for new prompts. As stated previously, this stage uses a local LLM, which is slower but runs on your device's hardware, meaning no data leaves your device unless you want it to. There is also the option of using a cloud AI model for this step, which yields faster and better distillation, but may not be ideal if the conversation contains data you would rather not share.
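For the curious, here is a rough sketch of what such a distillation step can look like, reusing the Message type from the import sketch above. I am assuming a local Ollama instance on its default port as the model runner; MindLock.io's actual local pipeline is not shown here and may work differently:

```typescript
// Sketch only: distill parsed messages into durable memory bullet points
// with a local model. Assumes an Ollama server on its default port; this
// is an illustration, not MindLock.io's actual pipeline.
async function distill(messages: Message[]): Promise<string> {
  const transcript = messages.map((m) => `${m.role}: ${m.text}`).join("\n");
  const prompt =
    "Extract only durable facts about the user (goals, preferences, " +
    "constraints) from this conversation, as short bullet points:\n\n" +
    transcript;

  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama3.2", prompt, stream: false }),
  });
  const data = await res.json();
  return data.response; // the distilled memory text, ready to store
}
```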
In the local version (completely free to use), the data is stored in your browser, with the option of a cloud database instead, which enables syncing between multiple devices. The cloud data is encrypted, and you are the only one with access to it; using cloud storage requires authentication.
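As an illustration of how client-side encryption before sync can work, here is a minimal sketch using the browser's standard Web Crypto API (AES-GCM with a key that never leaves your device). This is an assumed scheme for illustration, not MindLock.io's actual implementation:

```typescript
// Sketch only: encrypt a distilled memory client-side before any cloud sync.
// Uses the standard Web Crypto API; the key is generated locally and marked
// non-extractable, so plaintext never reaches the server.
const key = await crypto.subtle.generateKey(
  { name: "AES-GCM", length: 256 },
  false, // not extractable
  ["encrypt", "decrypt"]
);

async function encryptMemory(plaintext: string, key: CryptoKey) {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per record
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(plaintext)
  );
  return { iv, ciphertext }; // only this pair is synced; the key stays local
}
```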
You can view this data at any time on the memory page, which contains all your distillations, organized into topics to facilitate context retrieval. You can additionally create or edit memories.
Retrieving Context for AI Assistants
After that, you can reuse the distilled memories by generating context. A context is just like the memory the AI assistants store about you, except with control and ownership: a reusable piece of information you paste along with your prompts, so that the AI assistants perform their operations with your needs in mind. Contexts are generated via a local LLM, with the option of a cloud-based LLM for better and faster results. You also do not need to regenerate them every time, as they are stored and can be reused.
Just as with memories, you can edit and create new contexts. This way, we can guarantee that only what you want to send gets sent to an LLM. Instead of them deciding what to store about you, you decide what they see. This additionally lets you use AI assistants in incognito mode and reach beyond the models of your favorite vendor. Since you are providing better context, the performance of any vendor's LLMs will dramatically improve.
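Conceptually, a context is simple to assemble: a curated set of memories plus your current goal, turned into a paste-ready block. A minimal sketch, with hypothetical field names and example data:

```typescript
// Sketch only: assemble selected memories into a reusable, paste-ready
// context block. Field names and example data are hypothetical.
interface Memory {
  topic: string;
  text: string;
}

function buildContext(memories: Memory[], goal: string): string {
  const facts = memories.map((m) => `- [${m.topic}] ${m.text}`).join("\n");
  return (
    `Context about me (curated by me, pasted before my prompt):\n${facts}\n\n` +
    `Current goal: ${goal}\nUse this context when answering.`
  );
}

// Usage: paste the returned block ahead of any prompt, in any assistant.
const block = buildContext(
  [{ topic: "business", text: "Early-stage founder, pivoted to B2B automation" }],
  "Draft a cold outreach email"
);
```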
Start Using MindLock.io
Ready to start using MindLock.io today? We are waiting for you on the other side! mindlock.io (opens in a new tab). No sign-up required, no payment required, and most importantly, no spying on you.
Building in public. Follow my journey at InvisiblePuzzle (opens in a new tab) where I document how I'm building B2B automation tools while working full-time.
Tags: #ai #privacy #saas #mindlock #productivity