I asked Brave's AI how many years have passed since the Industrial Revolution and what percentage that is of the total age of the Earth. It told me 0.004%, but the actual figure is about 0.0000056%. Just letting you know.
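For what it's worth, the corrected figure checks out with a quick back-of-the-envelope calculation, assuming the Industrial Revolution began around 1760 and the commonly cited Earth age of about 4.54 billion years:

```python
# Rough sanity check on the percentage claim.
# Assumptions: Industrial Revolution start ~1760, Earth age ~4.54 billion years.
years_since = 2025 - 1760           # roughly 265 years
earth_age_years = 4.54e9            # widely cited estimate of Earth's age
pct = years_since / earth_age_years * 100
print(f"{pct:.7f}%")                # prints 0.0000058%
```

That lands in the same ballpark as 0.0000056%, and nowhere near 0.004%, which is off by roughly a factor of a thousand.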
@FSGHHTT,
AI — that is, all AI, not just Leo — can “hallucinate,” making things up or getting things wrong. This is inherent to how LLMs work. Taken from the Leo FAQ:
Yes, it does. Hallucinations are an intrinsic challenge in how LLMs work. Sometimes regenerating the answer may help. Always double-check Leo’s responses (for example, try the same query on Brave Search) before quoting them.
Are you sure, @FSGHHTT? lol.
Humor aside, keep in mind that all AI is in development. I also have to ask: which AI did you use? Was it the one on Brave Search, or did you use Leo in the browser? If Leo, you have multiple language models to choose from.
Each language model has things it’s better at than others. You can even find articles like https://www.nytimes.com/2024/07/23/technology/ai-chatbots-chatgpt-math.html which discuss how AI can be bad at math, for example, even as everyone constantly pushes to train AI to be better and give more accurate answers.