After reading an article about Leo, I decided to try it this morning. I wasted an hour and a half of my time. I asked it some basic questions about closed-end funds and received blatant lies. The reason I can say it was purposefully lying is that, after many questions and constant apologies for its mistakes, it finally admitted that it was just providing “fictional” answers “created for the purpose of the previous questions”.
This is a useless and dangerous tool. Also a blatant money grab. I used to think highly of the Brave browser but now think otherwise. Very sad.
@Knk,
Thanks for the feedback. Can you please share the transcript or screenshots of any conversations like this you have/initiate with Leo? It would also be good to know what model you were using when you saw this.
@Knk,
Thank you — can you also share which model you were using when this conversation occurred?
A reminder that Leo offers access to various Large Language Models. These models don’t think or know anything; they generate mathematically determined continuations of the text you feed into them. If you ask questions about material covered by the model’s training data, you’ll get a response that is likely to correlate highly with the question asked.
As an example, the United States’ National Anthem was undoubtedly part of the training data for Mixtral (our default model as of this date). As such, if I pass “oh say can you see” to the model, I would expect it to return something related to the National Anthem:
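To make “continuation” concrete, here is a minimal sketch using the Hugging Face transformers library, with GPT-2 as a small stand-in model (the model choice and settings are illustrative assumptions, not how Leo is implemented):

```python
# Minimal sketch: a language model continues text; it does not look up facts.
# GPT-2 stands in here for much larger models like Mixtral, which work the
# same way in principle.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator("Oh say can you see", max_new_tokens=20)
print(result[0]["generated_text"])
# Output: whatever tokens score as likely continuations of the prompt,
# given the training data -- not a verified fact.
```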
You shared that you queried the model about Closed-end Funds. I went ahead and asked Mixtral to tell me a bit more about this topic. The answer appears to be very accurate, based on some cursory browsing of Brave Search for definitions/explanations elsewhere.
Leo will not always get it right, however. There are ways to improve its output, which we will be working towards in the future. For example, when I personally wanted to know more about Closed-end Funds, I went to Brave Search. Leo will be able to do the same thing in the not-too-distant future, rather than having to rely exclusively on its own training data.
The key takeaway here is that the models don’t lie or attempt to deceive. They merely attempt to continue the sequence of characters fed to them, not from a place of reasoning about facts and reality, but based on the text and material on which they were trained. You can get the model to engage in all sorts of fanciful conversations, which is why you should always be cautious about its output.
Once Leo has access to Brave Search, it will be able to share results from the Web, which will help it to avoid all sorts of peculiar responses. We currently use AI on Brave Search, and augment its responses with citations, so that the accuracy of its output can be confirmed. We plan to do something similar with Leo as well.
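As a rough sketch of that pattern (often called retrieval-augmented generation), here is what grounding a model in search results can look like. The function names below are hypothetical stand-ins, not Brave’s actual APIs:

```python
# Rough sketch of retrieval-augmented generation (RAG). `search_web` and
# `ask_llm` are hypothetical stand-ins, not Brave's actual implementation.

def search_web(query: str) -> list[dict]:
    # Hypothetical: returns [{"title": ..., "url": ..., "snippet": ...}]
    raise NotImplementedError

def ask_llm(prompt: str) -> str:
    # Hypothetical: sends the prompt to a language model, returns its reply.
    raise NotImplementedError

def answer_with_citations(question: str) -> str:
    results = search_web(question)
    # Ground the model in retrieved text instead of training data alone.
    sources = "\n".join(
        f"[{i + 1}] {r['title']} ({r['url']}): {r['snippet']}"
        for i, r in enumerate(results)
    )
    prompt = (
        "Answer using only the sources below, citing them as [n].\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)
```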
I started off with the stock model they let you start with, then it told me I had run out of free questions, so I switched to the free one.
I had the same issue. I asked “Leo” several questions and got back a lot of nonsense. All the answers were lies. I just don’t know why Brave programmers are pushing this thing on us. It should be removable; it is probably just spyware. Ask it for a picture of the Founding Fathers and see how racist it is. Maybe they should change the name of the browser to Little Chrome. Huge fail for Brave. It may be time to flush it.
It’s not spyware, and it’s not being pushed on you; you do not have to use the service. Further, you can hide any Leo icons in the UI by going to Settings --> Leo.
For anyone curious about inaccurate answers, please see @sampson’s response above.
If it only spits out what it’s fed, then it is not intelligence, so stop billing it as such. Just make your search engine better and ditch the gimmicks.
I think the important issue here is whether the fictional answers were given because the model was instructed not to provide financial advice, or whether it happened for some other reason. If you want the product to serve you better, you need to provide more detailed feedback.
I would have to disagree there – the usefulness depends on the context, and the quality of A.I. tools is steadily improving.
Even large models like GPT-4 will sometimes make mistakes, and the cost of providing free public access to them runs to several million dollars per month. Because that cost is enormous, the models Brave uses are small by comparison, so the quality of their responses will vary accordingly, and these products will require more training & optimization. I think Brave’s commitment to privacy makes it worthwhile to be patient and cooperative here.
Brave’s language models have not been instructed to lie, but there is a trade-off between creativity & accuracy: if the model is biased too far towards creativity, it may give more inaccurate answers; if it’s biased too far towards accuracy, it may fail to provide closely-related information that you would find useful & relevant. There is a lot of fine-tuning and optimization to be done, and this is not a Brave-specific issue; it applies to all chatbots.
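To give a rough feel for that dial, here is a toy sketch of temperature sampling, which is one common way this trade-off is exposed. The vocabulary and scores below are made up for illustration:

```python
# Toy illustration of the creativity/accuracy dial, commonly exposed as a
# "temperature" parameter. All numbers here are invented.
import math
import random

def sample(logits: dict[str, float], temperature: float) -> str:
    # Softmax over logits scaled by temperature, then draw one token.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    total = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / total for tok, v in scaled.items()}
    return random.choices(list(probs), weights=probs.values())[0]

# Made-up next-token scores for "Closed-end funds trade on ...":
logits = {"exchanges": 3.0, "margins": 1.0, "rainbows": 0.1}

print(sample(logits, temperature=0.2))  # near-greedy: almost always "exchanges"
print(sample(logits, temperature=2.0))  # creative: unlikely tokens surface more often
```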
The work is just beginning, and A.I. is not going away just because some people don’t like it.
That comment is not really relevant here – but just to be accurate, artificial intelligence is the simulation of human intelligence processes by machines.
I realize these are just AI bots. I realize people want to make money. But provide a product that is not a complete failure. Ford doesn’t put new vehicles on the showroom floor without engines and wheels, for obvious reasons. Yet all of these so-called “AI” products, from all the companies, are failures and liars.
I consider them dangerous because there are people out there who believe everything that AI vomits out. There are also people using these AIs for nefarious purposes; we recently had a lawyer in BC use one to create fake case law in an actual court proceeding.
I always considered Brave to be helping the “little guy”. That’s why it really irked me to find it trying to sell something so flawed.
Just tell everyone the truth: it’s an entertaining tool designed to figure out what the next word will be. It is not there to reliably and consistently assist you.
> It is not there to reliably and consistently assist you.
For those who understand the limitations and know how to apply these tools correctly, A.I. search bots do “reliably and consistently” save time, even if not every result is satisfactory.
Please see @sampson’s response for more information about how LLMs work and generate answers to your prompts.