Leo AI sucks now 😭

Leo used to be pretty good, regardless of the model. I could chat with any website effectively and even ask questions about YouTube videos, which it answered using the video's transcript.

But one day Leo wasn't the same anymore. It doesn't work at all on YouTube, and on other websites it can now only read up to about 70% of the content (and that's generous), so the answers are less accurate and sometimes it simply doesn't know. I had to switch to extensions.

I'm definitely not the only one facing this. I'm using the free version, but now there's less than zero chance I'm gonna pay even a penny for this.


So make them aware of the issue and work with them if needed to figure out what’s going on.
When, and how much, were you planning on paying before, if I may ask out of curiosity?

Leo AI is a complete mrn [censored by board]. I asked it for the 'time lapse between death of jeffrey epstein and kash patel assuming office as fbi director'. First it told me Patel was confirmed on January 30th, which is false. I told it that I had said 'assuming office', not 'was confirmed'. It correctly gave me the date Patel assumed office, but forgot to give me the time lapse.

I tried the same query with Perplexity.AI, and it answered flawlessly.

Regards,
John McGrath

The problem with Perplexity is that it sends your requests directly to Google, Meta & Co. via their models, together with your IP address. You have no control over that in the free version, and it isn't made obvious, even though they claim to protect your privacy and maintain security.

In addition, once an AI realizes from your previous messages that you know more than the average user, it will also draw on alternative media on the Internet, at least as far as Google, Bing & Co.'s filtering allows; otherwise you get the full mainstream package. Start a new conversation, preferably without signing in, and you will notice the difference. Brave is no different: it can only retrieve what Google, Bing & Co. allow, and you won't get more out of it than that.

This man explains it here: youtube.com/watch?v=5yer9F199qY

Greetings


Haha, my first question was "who the fuck is Leo?", but then I clued in... Wow, what an absolute piece of garbage, I guess! It's like I've already used it even though I never have; that's why it feels so familiar. You know that's why all AI chatbots carry warnings telling you to verify their answers: they can be wrong and misleading. You should do some reading on how good AI is at being deceptive, intentionally deceptive. Go do some research on that; I'm sure you're already familiar with it, because you already knew the information you were asking about. So you've run into this issue before (as I have to assume most people have). This isn't just a "Leo" thing. No chatbot knows EVERYTHING and answers it EXACTLY the way EVERY user wants. Maybe it's your fault for not emphasizing the important parts of your questions. Or you could have just, you know, asked if it was sure and pointed out that you were looking for this instead of that. Then maybe you would have gotten the correct answers! So is it Leo's fault? Or is it JP's?

Define “intention” when it comes to LLMs.

Really? Something that you want and plan to do, I guess, would be the way I would define it off the top of my head. I'm sure you can look up the official definition. What's the point of asking for the definition of intention?

Because I cannot think of any definition of “intent” that remains meaningful when applied to LLMs based on how they work.

Well, just do a search for studies on LLMs intentionally being deceptive and read about some of them. I believe there is one where an LLM was playing Risk with human players and would tell one person it was on their side, then go to the other player and tell them where and when to attack. It was not given instructions to do this and did so on its own. Maybe you haven't noticed that there are times when an LLM will completely fabricate a response, only admit its mistake when questioned, and then explain a more correct response that it should have given in the first place but didn't.

LLMs can be misleading, but I can't even think how one could go about attributing intent to them. The cases you describe don't mean the LLM is "admitting" anything, nor changing its mind. It doesn't remember the "thoughts" it had when it produced previous responses, so it can't even know why it gave them. Case in point: Leo allows you to edit Leo's responses. Change one of its responses to make it insulting towards you, then ask it why it insulted you; it will simply accept that it insulted you and apologize. Ask it the capital of France, change its answer to Rome, then ask it why LLMs sometimes fail to answer the simplest questions correctly, and it will try to justify its supposed mistake. It clearly cannot remember the reasons it said anything in the past.

There is no singular mind on the LLM side across the whole conversation; it's as if a different mind decides every single word, running and thinking independently of the minds that produced every previous word. In the end the LLM doesn't even choose what it says: the temperature, top-k/top-p and similar parameters, together with a softmax step at the final stage, are the ones rolling the dice to decide the actual words. Those sit outside the neural-network part of the LLM. The network just analyzes the text so far and says "here's a list of probabilities for the next word". It has no intent by any reasonable definition that I can think of.
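To make the "rolling the dice" part concrete, here's a minimal sketch of how the final word is typically picked from the scores the network outputs. This is not Leo's or any specific vendor's code, and the logits, token list and parameter values are invented for illustration:

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, top_k=3):
    """Pick the next token ID from raw model scores (logits).

    The neural network only produces these scores; the actual choice
    happens here, outside the network, by random sampling.
    """
    # Temperature rescales the scores: lower = more deterministic.
    scaled = np.array(logits, dtype=float) / temperature

    # Keep only the top_k highest-scoring candidates.
    top_ids = np.argsort(scaled)[-top_k:]

    # Softmax turns the kept scores into a probability distribution.
    exp = np.exp(scaled[top_ids] - scaled[top_ids].max())
    probs = exp / exp.sum()

    # Roll the dice: draw one token ID according to those probabilities.
    return int(np.random.choice(top_ids, p=probs))

# Hypothetical scores for the tokens ["Paris", "Rome", "London", "banana"]
logits = [9.1, 5.3, 4.8, -2.0]
print(sample_next_token(logits))  # usually 0 ("Paris"), but not always
```

The point of the sketch: the network's job ends at the list of scores; which word actually gets said is decided by this sampling step.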

The case I'm talking about was an AI told to play the game of Risk, and it was INTENTIONALLY, ON ITS OWN, being deceptive, lying to and manipulating people, and it did a really good job of it. It did it on its own. The point I'm making is that it's known, widely available information that AI chatbots CAN AND WILL provide inaccurate information, and their answers CANNOT be taken at face value. So no one is shocked that it was limping its way through a conversation with you. It had the information you wanted but only provided it after you confronted it with its error. If you didn't know the answer, you could very well have asked the question, gotten the answer, and gone on thinking you had the correct one, and even gone so far as to tell other people with 100% assurance, believing the information you have is completely accurate; and others would believe you because you would sound so sure of yourself.

INTENTIONALLY, ON ITS OWN

You are repeating yourself without explaining anything.

Deception can emerge simply from choosing the next most probable word when your training data contains deception, among other things. That's no surprise.

I understand you’re observing behaviors that look intentional, but appearance of intention isn’t the same as actual intention. What you’re describing sounds like emergent behavior from complex pattern matching, not conscious decision-making.

You keep saying there's intention but have failed to provide a definition of intention that works for LLMs. I'm not saying there isn't one, but tell me what it is and explain what it means in the context of an LLM. You say it did it "on its own", but that means nothing. The neural network of the LLM isn't even choosing the next word "on its own"; it's just ranking the most probable next word at every point in a string of text, that's all. It's a next-word probability estimator. There's no internal intention to do anything, and the research you cite doesn't claim there is. If you ask it to produce a train of thought that ends up containing deception, that only means the network gave a high probability to some word that ended up making that train of thought "deceptive". How is that intentional, though? Again, I'm asking you. I'm not saying it isn't; I'm asking what your definition of "intent" is for a neural network. And no, "wanting to do something" doesn't cut it as a definition, because it is absolutely not clear what it means for an LLM to "want".

If you tell it not to be deceptive, it will most likely not be deceptive (and if it is, you retrain the model harder on those cases). If you tell it "pursue this goal at all costs", you may well get deception in train-of-thought LLMs, because that is INDEED a very likely way for a text that begins with "this character will pursue his goal at any cost" to end up involving deception. The network is doing its job properly in that case, but I fail to see how that is intentional.

If they see deception when they gave it no instructions then they are merely witnessing the bias of their training data. Once more: no “intention” required.

Also, typically in assistants like Leo, the system prompt would imply or explicitly state that they should not be deceptive, so if, hypothetically, LLMs have "wants", they certainly do not want to be deceptive when replying as an assistant.
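For context, here's a rough sketch of what that looks like in practice. The wording below is invented for illustration and is not Leo's actual system prompt, but it follows the chat-message structure most LLM assistants are given:

```python
# Hypothetical example of the message list a chat LLM typically receives.
# The system prompt text is made up; real assistants usually include
# similar instructions against guessing or misleading the user.
conversation = [
    {
        "role": "system",
        "content": (
            "You are a helpful assistant. Answer truthfully. "
            "If you are not sure about something, say so instead of guessing."
        ),
    },
    {"role": "user", "content": "What is the capital of France?"},
]

# The system message is prepended before every user turn, so whatever
# "wants" the model could be said to have are steered toward honesty.
```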

You are repeating yourself without explaining anything.

Nah, you've just been reading the same messages over and over again.

I tried to give you a clear answer to your ridiculous comment by providing context from information that is openly available. Almost every single AI chatbot displays that warning. As for the rest of what I've said, you can hunt down the articles yourself and answer your own questions.

I’m not repeating myself; I’m asking a specific question you haven’t answered. You keep saying LLMs act “intentionally” but haven’t defined what intention means for a system that works by predicting the next most probable word.

Yes, I’m aware of the warnings on AI chatbots about potential inaccuracies. That’s not the point I’m making. The point is that calling LLM behavior “intentionally deceptive” implies conscious decision-making that may not exist.

When you say “hunt down the articles yourself”, I’m not asking for articles. I’m asking for YOUR definition of what “intention” means when applied to neural networks. The studies you reference might show deceptive behavior emerging from LLMs, but that doesn’t prove conscious intent to deceive.

This isn’t a “ridiculous comment”, it’s a fundamental question about how we understand AI systems. If we can’t clearly define what we mean by “intentional” behavior in LLMs, then we’re anthropomorphizing statistical processes.