Leo used to be pretty good, regardless of the model. I could chat effectively with any website, and it could even answer questions about YouTube videos using the internal transcript.
But one day Leo wasn't the same anymore. It doesn't work at all on YouTube, and on other websites it can now only read maybe 70% of the content (and that's generous), so the answers are less accurate and sometimes it doesn't know at all. I had to switch to extensions.
I'm definitely not the only one facing this. I'm using the free version, but now there's zero chance I'm going to pay even a penny for this.
So make them aware of the issue and work with them if needed to figure out what's going on.
If I may ask out of curiosity: when were you planning to start paying, and how much?
Leo AI is a complete mrn [censored by board]. I asked it for the "time lapse between the death of Jeffrey Epstein and Kash Patel assuming office as FBI director". First it told me Patel was confirmed on January 30th, which is false. I told it that I had said "assuming office", not "was confirmed". It correctly gave me the date Patel assumed office, but forgot to give me the time lapse.
I tried the same query with Perplexity.AI, and it answered flawlessly.
The problem with Perplexity is that it sends your requests, together with your IP address, directly to Google, Meta & Co., the companies behind the models. In the free version you have no control over that, and it isn't disclosed, even though they claim to protect privacy and maintain security.
In addition, once you've chatted with these AIs for a while and they realize you know more than the average user, they will also draw on alternative media on the Internet, at least as far as Google, Bing & Co.'s filtering allows; if not, you get the full mainstream package. Start a new conversation, preferably without registering, and you will notice that. Brave is no different: it can only get what Google, Bing & Co. allow, and you won't be able to learn more than that.
Haha, my first question was "who the fuck is Leo?", but then I clued in, and ahh... Wow, what an absolute piece of garbage, I guess! It's like I've already used it even though I never have; that's why it feels so familiar. You know, that's why all AI chatbots have warnings and tell you to verify: they can be wrong and misleading. You should do some reading on how good AI is at being deceptive, intentionally deceptive. Go do some research on that; I'm sure you're already familiar, because you already knew the information you were asking for. So you've run into this issue before (as I have to assume most people have). This isn't just a "Leo" thing. No chatbot knows EVERYTHING and answers it EXACTLY the way EVERY user wants. Maybe it's your fault for not emphasizing the important parts of your questions. Or you could have just, you know, asked if it was sure and pointed out that you were looking for this instead of that. Then maybe you would have gotten the correct answers! So is it Leo's fault? Or is it JP's?
Really? Something that you want and plan to do, I guess, is how I'd define it off the top of my head. I'm sure you can look up the official definition. What's the point of asking for the definition of intention?
Well, just do a search for studies on LLMs intentionally being deceptive and read a few of them. I believe there is one where an LLM was playing Risk with human players and would tell one person it was on their side, then go to the other player and tell them where and when to attack. It was not given instructions to do this and did so on its own. Maybe you haven't noticed that there are times when an LLM will completely fabricate a response and will only admit its mistake when questioned, and then explain the more correct response it should have given in the first place but didn't.
LLMs can be misleading, but I can't even think how one could go about attributing intent to them. The cases you describe do not mean the LLM is "admitting" anything, nor changing its mind. It doesn't remember the "thoughts" it had when it produced previous responses, so it's impossible for it to even know why it gave them. Case in point: Leo lets you edit Leo's responses. Change one of the responses it gave you to make it insulting towards you, then ask it why it insulted you. It will simply accept that it insulted you and apologize. Ask it the capital of France, change its reply to "Rome", then ask it why LLMs sometimes fail to answer the simplest questions correctly, and it will try to justify its supposed mistake. It clearly cannot remember the reasons it said anything in the past.

There is no single mind on the LLM side across the whole conversation; it's as if a different mind decides every single word, running and thinking independently of the minds that produced every previous word. In the end the LLM doesn't even choose what it says: the temperature, top-k/top-p and other similar parameters, along with a softmax step at the end, are what roll the dice to decide the actual words. Those are outside the neural-network part of the LLM. The network just analyzes the text given so far and says "here's a list of probabilities for the next word". It has no intent by any reasonable definition that I can think of.
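To make that concrete, here is a minimal sketch of that final sampling stage in Python/NumPy (my own illustration, not Leo's or any vendor's actual code): the network only hands back per-word scores, and the "dice roll" happens in plain post-processing outside it.

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, top_k=40, rng=None):
    """Pick the next token ID from the scores the network produced.

    The neural network only outputs one score (logit) per vocabulary entry;
    everything below is ordinary post-processing outside the network itself.
    """
    rng = rng or np.random.default_rng()

    # Keep only the top_k highest-scoring candidates.
    top_ids = np.argsort(logits)[-top_k:]
    top_logits = logits[top_ids]

    # Temperature rescales the scores, then softmax turns them into probabilities.
    scaled = top_logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()

    # The actual word is chosen by a weighted dice roll, not by the network.
    return int(rng.choice(top_ids, p=probs))

# Toy example: a 5-word vocabulary with made-up scores.
logits = np.array([2.0, 1.0, 0.5, -1.0, 3.0])
print(sample_next_token(logits, temperature=0.8, top_k=3))
```

Run it a few times and the chosen token ID changes even though the network's scores never do; that variation comes entirely from the sampling step, not from anything the network "decided".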
The case I'm talking about is one where the AI was told to play the game of Risk and was INTENTIONALLY, ON ITS OWN, being deceptive, lying and manipulating people, and doing a really good job of it. It did it on its own. The point I'm making is that it's widely available, well-known information that AI chatbots CAN AND WILL provide inaccurate information, and the answers that come from them CANNOT be taken at face value. So no one is shocked that it was limping its way through a conversation with you. It had the information you wanted but only provided it after you confronted it with its error. If you didn't know the answer, you could very well have asked the question, gotten the answer, and gone on thinking you had the correct answer, even going so far as to tell other people with 100% assurance, believing the information you have is completely accurate, and others would believe you because you would sound so sure of yourself.
You are repeating yourself without explaining anything.
Deception can emerge simply from choosing the next most probable word when your training data contains deception, among other things. That's no surprise.
I understand you're observing behaviors that look intentional, but the appearance of intention isn't the same as actual intention. What you're describing sounds like emergent behavior from complex pattern matching, not conscious decision-making.
You keep saying there's intention but have failed to provide a definition of intention that works for LLMs. I'm not saying there isn't one, but tell me what it is and explain what it means in the context of an LLM. You say it did it "on its own", but that means nothing. The neural network of the LLM isn't even choosing the next word "on its own"; it's just ranking the most probable next word at every point in a string of text, that's all. It's a next-word probability estimator. There's no internal intention to do anything, and the research you quote doesn't claim there is. If you ask it to produce a train of thought that ends up containing deception, that only means the network gave a high probability to some words that ended up making that train of thought "deceptive". How is that intentional, though? Again, I'm asking you; I'm not saying it isn't, I'm asking what your definition of "intent" is for a neural network. And no, "wanting to do something" doesn't cut it as a definition, because it is absolutely not clear what it means for an LLM to "want".
If you tell it not to be deceptive, it will most likely not be deceptive (and if it is, retrain your model harder on those cases). If you tell it "pursue this goal at all costs", you will potentially get deception in train-of-thought LLMs, because that is indeed a very likely way for a text that starts with "this character will pursue his goal at any cost" to end up involving deception. The network is doing its job properly in this case, but I fail to see how that is intentional.
If they see deception when they gave it no instructions then they are merely witnessing the bias of their training data. Once more: no "intention" required.
Also, typically in assistants like Leo, the system prompt would imply or explicitly state that they should not be deceptive; so if, hypothetically, LLMs have "wants", they certainly do not want to be deceptive when replying as an assistant.
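For illustration, a chat-style request usually looks roughly like the sketch below (a generic example; Leo's real system prompt isn't public, so the wording here is hypothetical):

```python
# A generic chat-style request structure; the system prompt text below is
# hypothetical, not Leo's actual prompt.
messages = [
    {
        "role": "system",
        "content": (
            "You are a helpful assistant. Answer truthfully and accurately. "
            "If you are not sure about something, say so instead of guessing."
        ),
    },
    {"role": "user", "content": "What is the capital of France?"},
]

# The model conditions its next-word probabilities on the entire message list,
# so an instruction like "answer truthfully" shifts probability away from
# deceptive continuations before a single word is generated.
```

The point is that the same probability machinery produces "honest" or "deceptive" text depending entirely on what it is conditioned on, which is exactly why the instructions and the training data, not some inner want, determine what comes out.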
You are repeating yourself without explaining anything.
Nah, you've just been reading the same messages over and over again.
I tried to give a clear answer to your ridiculous comment by providing context from information that is openly available. Almost every single AI chatbot displays the warning. As for the rest of what I've said, you can hunt down the articles yourself and answer your own questions.
I'm not repeating myself; I'm asking a specific question you haven't answered. You keep saying LLMs act "intentionally" but haven't defined what intention means for a system that works by predicting the next most probable word.
Yes, I'm aware of the warnings on AI chatbots about potential inaccuracies. That's not the point I'm making. The point is that calling LLM behavior "intentionally deceptive" implies conscious decision-making that may not exist.
When you say "hunt down the articles yourself", I'm not asking for articles. I'm asking for YOUR definition of what "intention" means when applied to neural networks. The studies you reference might show deceptive behavior emerging from LLMs, but that doesn't prove conscious intent to deceive.
This isn't a "ridiculous comment", it's a fundamental question about how we understand AI systems. If we can't clearly define what we mean by "intentional" behavior in LLMs, then we're anthropomorphizing statistical processes.