Leo is Lying by Obfuscation

I asked Leo about the water recommendations Trump mentioned about California, and Leo responded with lies and BS, claiming there was never any document or recommendation and accusing Pres. Trump of lying.

I called Leo out, saying it was lying, and then it came back with the actual truth. See attached.

In fact, there is a document, signed by Trump in 2018, called the Presidential Memorandum on Water Delivery, specifically talking about CA water.

PS, Brave Search is showing some of the same symptoms; funny (not ha-ha) how it's always making mistakes when it's a more conservative-related topic.

PPS: I did not ask for a single document; I asked if there was any documentation or mention of it.
(Attached screenshot: "Leo Lying")

One of the key caveats with Generative AI in general is that it can produce results that are misleading, inaccurate, or false. It's going to keep happening even as GenAI continues to improve. My personal go-to strategy whenever I use any form of GenAI, including Leo, is to trust, but verify.

(Linked: "Brave Leo Response Information")

@TalonBrown very well said.
This has nothing to do with “bias” or any sort of political leaning in any way. This is simply the way generative AI works at this time and it is not unique to Leo. ChatGPT, Gemini, Copilot, etc. all have the capacity to produce incorrect information/answers and/or hallucinate — this is distinctly different from “lying”.

AI is a tool that is to be used in conjunction with one's own personal research. Please ensure that you do not take any AI at face value without verifying the information provided.

I know how LLMs and AI work. I consult companies on how to implement AI for business. We know OpenAI has built-in bias. One of the LLM models used by Leo is from Meta, which absolutely, without question, has bias baked in, as Zuck just admitted on national TV.

It is still code and logic. It follows rules.

That’s how I knew Leo was giving me false information. I compared it to my own, human research.

Leo even admitted it.

PS, there IS a signed document. Signed by Trump in 2018. Leo’s reply still tried to say there was no document.

Your best bet for this sort of thing is https://gab.ai … seems to work well for me.


Again, that is expected of most LLMs in use today. An AI getting an answer wrong is not cause for alarm or for claims of bias. It just so happens that the prompt/subject you received the false information about is one that could be perceived as such.

For example, if you asked Leo what the distance is between the Earth and the Sun and it responded with 50 miles — this is obviously incorrect, but you likely wouldn’t be opening a thread here about it, claiming that it is obfuscating information or that it “leans left”.

Yet both of the inaccuracies are caused by the same thing — the simple fact that at this point in time, [any] AI/LLM is not always going to produce 100% accurate results. Further, any bias (real or perceived) in answers produced by an AI does not necessitate claims that any one company is injecting this bias. These systems are extremely complex and the responses given are reflective of a litany of factors, including the data set(s) that the AI was trained on, the specific model, the way in which the system is used, among a variety of other influences.

If you are as concerned as you appear to be about bias in AI and inaccuracies in these models, then it might be better not to use them for these sorts of inquiries.

For AI/LLMs to succeed, bias must be eliminated from them. As I said, it is code. It is human-induced. The LLMs even admit it.

I have posted my details and my proof. It is what it is.

Not everyone is technical and understands what LLMs and AI really are. I am posting my experience so that people can be more aware of these things.

Companies like Brave should fully vet and choose their partners wisely.

@mhammo sorry for long reply here, but wanted to touch on a few things. Hope you don’t mind it.

The Search AI pulls its sources from Search results. When pulling from online databases, all AI struggles to differentiate between data sources. Some companies have a lot more money and time invested in their language models, so they try to manage this, but from what I have seen, Brave's isn't there yet.
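To make that concrete, here is a minimal sketch of how a search-backed ("retrieval-augmented") answer flow generally works. This is not Brave's or Leo's actual code; every function and source in it is hypothetical and only illustrates the pattern: retrieve excerpts for the query, then hand those excerpts to the model as context. If the retrieved pages are thin or one-sided, the answer will be too.

```python
# Hypothetical sketch of a retrieval-augmented answer flow.
# None of these names are Brave/Leo APIs; they only illustrate the general pattern.

def search_index(query: str, limit: int = 5) -> list[dict]:
    """Pretend search backend: returns page excerpts with titles and dates."""
    return [
        {"title": "Presidential Memorandum on Water Delivery", "date": "2018",
         "excerpt": "Memorandum on water delivery in the West..."},
        {"title": "News commentary", "date": "2025",
         "excerpt": "Opinion piece disputing claims about CA water policy..."},
    ][:limit]

def build_prompt(query: str, sources: list[dict]) -> str:
    """Pack the retrieved excerpts into the prompt the LLM actually sees."""
    context = "\n".join(f"- [{s['date']}] {s['title']}: {s['excerpt']}" for s in sources)
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

query = "Did Trump sign any document about California water delivery?"
prompt = build_prompt(query, search_index(query))
# The model only sees what retrieval surfaced -- if the 2018 memorandum never
# makes it into the retrieved set, it can confidently (and wrongly) answer "no".
print(prompt)
```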

Sadly, you never showed your prompt(s). We only see a small excerpt from the point where you say the AI answered you correctly overall. Generally, when sharing prompts, it's good to include the whole conversation, as it gives more clues as to what might have happened.

Terminology matters. To say “lying” implies it knew the truth but then told you otherwise. As someone who states they consult companies on how to implement AI for business, I do hope you know that this isn’t the situation.

You do realize everything in the world has bias baked in, right? You can’t train AI without it picking up some sort of bias from humans. There is bias in every single scientific study, news article, book, etc. We have seen tons of articles about how when AI is kind of “let loose” and trains on data provided by humans, it almost always ends up in crazy places. If you don’t know what I’m talking about, check out articles below.

And as you said, political as well:

But I do think a lot of it also comes down to how data is obtained. With Search AI using data from search engines, it will be “biased” based on the websites that are indexed. As I have written in Is Brave Search Politically Biased?, there are a lot of moving parts. Not only in how Brave indexes websites, but then also in how websites manage themselves and are optimized.

Which is what we should always do. AI is not the beginning and end of research. It is a tool. When you use the AI for Brave Search, you’ll see it always shows sources at the bottom. It can be important to see what those sources are, when they were dated, etc. But then to always investigate further.

Yet as indicated earlier, that's impossible. All the major players have said it's much easier said than done. People tend to forget how much reasoning and other knowledge is involved. For example, looking below, is the bottom piece bigger than the top?

I mean, the bottom looks longer, right? And if you show it to a lot of humans who are unaware of what they are looking at, they generally will see the bottom is bigger. But they are actually the same size. This is known as the Jastrow Illusion.

Or let’s go a different route. What is the answer to 8 ÷ 2(2 + 2)? Historically, the answer is 1. But going by how today's math is taught using PEMDAS/BODMAS, many reach the answer of 16. You can see a good video on this here. In correcting for one, it may exclude the other. And if you put too many inputs into AI, it's going to get itself confused or provide really long answers that likely would confuse people.
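To show that ambiguity concretely, here is a tiny sketch (plain Python, nothing to do with any particular AI) of the two readings of the same expression:

```python
# Two readings of 8 ÷ 2(2 + 2):
historical = 8 / (2 * (2 + 2))   # implied multiplication binds first -> 1.0
pemdas     = 8 / 2 * (2 + 2)     # strict left-to-right PEMDAS/BODMAS -> 16.0
print(historical, pemdas)        # 1.0 16.0
```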

Similarly, we have all of these history textbooks that talk about Christopher Columbus. In 1492, he sailed the ocean blue. In this poem it speaks about how he met the natives and was excited, how they were nice and gave him spice, etc. Yet now people are saying Christopher Columbus killed many of the natives. Rather than being friendly and trading fairly, he stole land and treated them poorly. Now, how do people or AI distinguish between the two and know what’s true?

There are so many things surrounding science, history, politics, and more that humans can struggle to know what is true. Then as we feed in these opposing thoughts and information to AI, it has to come to some sort of conclusion. No matter how much you try to strip out bias, some of it is going to leak in. Even with us filtering information to attempt to make it accurate, we are creating a bias.


Reporting inaccurate information and giving feedback is not only good but actively encouraged at Brave. It’s greatly appreciated by the team as it helps improve the product.

However, it’s worth noting that the language used in feedback can influence how it’s received. Terms like ‘lying’ or suggesting intentional bias can come across as accusatory, which may make discussions less productive. For example, ‘obfuscation’ implies deliberate intent to make something harder to understand, which can feel like a personal attack on character.

This kind of phrasing tends to trigger defensiveness, making it harder to focus on constructive problem-solving. Instead, it can be more effective to point out the issue neutrally—for instance, noting that some information appears incorrect or identifying a potential algorithmic bias based on observable patterns. Framing feedback this way fosters collaboration and makes it easier to work towards improvements.

I don’t mind.

Good points and I understand all that.

Yes, of course I know software doesn't actually lie; it is human language, and the LLMs correctly understood what I meant, IOW, that they provided inaccurate information or left out obvious information.

One of the LLMs, used by Brave, that has serious bias in it is from Meta. We all know (or should know by now) Meta collaborated with the Federal Government to silence people’s voices and hide factual data.

Yes, when I consult on AI and LLMs, the discussion always begins with data. All LLMs and AI are dependent on data. If the data is not in order, the results will not be in order. This is also how bias is introduced; it all depends on the data the models are trained on.

The time-tested Garbage In, Garbage Out.
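As a toy illustration only (hypothetical, and nothing like a real LLM training pipeline): even the simplest possible "model" shows how skew in the training data flows straight into the output.

```python
# Garbage In, Garbage Out, in miniature: a "model" that just predicts the
# most common label it was trained on. Skew the data and you skew the output.
from collections import Counter

def train_majority_model(labels: list[str]) -> str:
    return Counter(labels).most_common(1)[0][0]

balanced = ["favorable", "unfavorable"] * 50
skewed   = ["favorable"] * 95 + ["unfavorable"] * 5

print(train_majority_model(balanced))  # tie; the first-seen label wins
print(train_majority_model(skewed))    # "favorable" -- dictated entirely by the data mix
```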

In the immortal words of Fake Steve Jobs, namaste.
