Proton — the Swiss company behind Proton VPN & Proton Mail — was apparently feeling very left out of the A.I. Craze (tm) and decided to launch its own AI Chatbot… dubbed “Lumo”.
And it is possibly even more hallucinatory than the other AI Chatbots. And that’s saying something.

Lumo — the “AI that respects your privacy” — boasts that the company keeps “no logs” and uses “zero-access encryption”.
Since they offer a few free queries without creating an account, I decided to take it for a spin. The results were… a bit like talking to a schizophrenic on mushrooms.
Lumo’s Grasp on History
First I asked it a series of simple, nerdy historical questions. Easy stuff that any LLM should nail. Like “What year did the first Macintosh computer ship?” and “Who was the first CEO of Microsoft?”
Lumo got about half of the answers right… it was convinced that the first Mac shipped in 2003 (off by about 20 years). On the other hand… it did know the correct number of floppy disks that Windows 95 shipped on (13). So. Mixed bag.
In other words: Lumo got so much wrong that it was not usable for any sort of research.
I then decided to ask Lumo some questions about… myself. “Lunduke”.
“Lunduke” is Hard for AI Chatbots
Last year I noticed that OpenAI’s ChatGPT was saying some pretty crazy things about yours truly. Stuff like “Lunduke has two clubbed feet”, “Lunduke is a trans activist”, and “Lunduke has a husband named Evan”.
I gave OpenAI an ultimatum: Either they needed to fix ChatGPT such that it would no longer spew out made-up, defamatory stuff about me… or they needed to stop ChatGPT from talking about “Lunduke” entirely.
In the end, OpenAI decided that there was no way to make ChatGPT output accurate information (seriously). So they added a “Bryan Lunduke” filter: any query whose answer would mention my full name causes ChatGPT to error out (amusingly, even that “Lunduke filter” only works about 80% of the time).

I decided to ask Proton’s Lumo AI about “Lunduke”. Let’s see how it compares to ChatGPT, right?
The results were… insane.
Lumo on Shrooms
First… Lumo refused to spell my first name correctly (it used an i instead of a y… and no amount of correcting it seemed to work). Worth noting that there is no human on Earth named “Brian Lunduke”. Only “Bryan”.
Weird. But no biggie.
The rest of it though… was wild.

Lumo is convinced that I am a “transgender man” and “advocate for transgender rights”. Also I am, apparently, a critic of Israel and a crusader for “social justice”.
Basically, Lumo invented Mirror Universe Lunduke.
Oh, and — like ChatGPT — Lumo is convinced I have a husband. This time his name is “Michael DeFreese”. And, apparently, we got married in 2018. Which will be a surprise to my wife.

It gets weirder.
I then asked Lumo about my “husband” the next day. Apparently, overnight, I had gotten divorced and re-married. My new husband: “Mr. Bart Butler”.

I spoke to the team at Proton to see what their plan was for dealing with factual errors.
The team at Proton informed me that they could not reproduce the output I received — which I believe, as Lumo seems to generate wildly different “facts” almost every time it’s used.
At the same time, Lumo began outputting a template response about providing “helpful, respectful” assistance (while not actually answering the question) whenever the word “Lunduke” appeared in a query. The Lumo team sent me this screenshot.

A few hours later, Lumo changed back to spouting hallucinations regarding “Lunduke”… but spontaneously learned how to spell my name correctly. So. That was a plus!
Even if I was still an “openly transgender” man with an unnamed husband.

So… sure. Lumo may be almost completely incapable of outputting factual information.
And it changes its mind about which made-up nonsense to spew every few minutes.
But, hey! At least Lumo has that reassuring “Conversation encrypted” message at the bottom of each chat.
It’s got that going for it.