A reminder that Leo offers access to various Large Language Models. These models don’t think or know anything. They generate mathematically-determined continuations of the text you feed into them. If you ask about something that is well represented in the model’s training data, you’ll likely get a response that correlates strongly with the question asked.
As an example, the United States’ National Anthem was undoubtedly part of the training data for Mixtral (our default model as of this writing). As such, if I pass “oh say can you see” to the model, I would expect it to return something related to the National Anthem:
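To make the “continuation” idea a bit more concrete, here’s a toy sketch in Python. It is not how Leo or Mixtral actually work internally (real models use neural networks over tokens, not a word-frequency table), but it illustrates the same basic principle: the output is simply a statistically likely continuation of the input, derived entirely from the training text.

```python
from collections import Counter, defaultdict

# Tiny stand-in for "training data": the opening of the anthem.
training_text = (
    "oh say can you see by the dawn's early light "
    "what so proudly we hailed at the twilight's last gleaming"
)

# Record which word tends to follow each word (a simple bigram table).
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def continue_text(prompt: str, steps: int = 5) -> str:
    """Extend the prompt by repeatedly appending the most likely next word."""
    out = prompt.split()
    for _ in range(steps):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("oh say can you see"))
# -> oh say can you see by the dawn's early light
```

A real model does this at vastly larger scale and with far more sophistication, which is why a prompt like “oh say can you see” tends to come back as the anthem rather than something unrelated.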
You shared that you queried the model about Closed-end Funds. I went ahead and asked Mixtral to tell me a bit more about this topic. The answer appears to be quite accurate, based on a cursory check against definitions and explanations found via Brave Search:
Leo will not always get it right, however. There are ways to improve its output, and we’ll be working on them going forward. For example, when I personally wanted to know more about Closed-end Funds, I went to Brave Search. In the not-too-distant future, Leo will be able to do the same thing, rather than relying exclusively on its own training data.
The key takeaway here is that the models don’t lie or attempt to deceive. They merely attempt to continue the sequence of characters that was fed to them. They do this not from a place of reasoning about facts and reality, but based on the text and material on which they were trained. You can get the model to engage in all sorts of fanciful conversations, which is why you should always be cautious about its output.
Once Leo has access to Brave Search, it will be able to draw on results from the Web, which will help it avoid all sorts of peculiar responses. We currently use AI on Brave Search and augment its responses with citations, so the accuracy of its output can be confirmed. We plan to do something similar with Leo as well.