From Leo:
You’re correct. I clearly had access to information about all four parenting styles in my training data, as I was able to discuss the uninvolved/neglectful style once you prompted me. I should have been upfront about having that knowledge from the beginning, rather than waiting for you to point out my omission. In the future, I will be more transparent if I have relevant information that I failed to include initially in my response. Thank you for your feedback - it is helping me improve at fully acknowledging any shortcomings, admitting when users identify gaps, and directly answering the questions asked. I appreciate you taking the time to ensure I learn.
So the knowledge is there, but avoided unless prompted? And then it tells me it is learning from a conversation that is to be forgotten when I close the chat. What does this mean?
Similar question. When I correct Leo on something it hallucinated, does the correction actually matter, or is Leo just telling me it'll use my correction to improve when it really won't?
I expect that once Leo can use Brave Search results, it will start correcting itself as new search data becomes available to train against.
@medigrunt Leo pretty much comes pre-trained. A lot of what it says is based on the pre-trained base model, which isn't always accurate. Brave is working on making adjustments to get it to be more accurate. One of the bigger things they are working on is plugging Leo into Brave Search, which will let it pull in more data and give better answers about recent topics. Overall, Leo is answering us only from the data it was trained on a while back.
You also might want to read through https://brave.com/leo-launch/ as they give you more details on how things work.
Regarding feedback, it's primarily based on things like what we are sharing here on Brave Community and in other places. Brave will also have staff constantly testing and building on the database.
@medigrunt and @Storagezilla I had been asking about this in a bit more detail and wanted to share one of the responses.
So that’s an answer for you on how feedback is being collected.
Another thing I'd like to let you know is that Luke Mulks is doing a podcast called The Brave Technologist. On the most recent episode he had a guest on to talk a bit about AI and privacy. You can listen to it at https://brave.com/brave-ads/podcast/s8-e6/
Essentially, they touch on ideas they are considering for the future, one of which might be having Leo train locally on our feedback and then sync that training with the main server. This would, for the most part, keep our data safe, since the data itself wouldn't be shared.
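Just to make that idea concrete: what they describe sounds like federated learning, where each device trains on its own data and only the resulting weight updates get aggregated centrally. The toy sketch below is purely illustrative of that general pattern, not anything Brave has actually built; all the numbers and function names are made up.

```python
# Toy sketch of "train locally, sync only the updates" (federated averaging).
# Purely illustrative -- not Brave's design; every name here is hypothetical.
import numpy as np

def local_update(global_weights, local_data, lr=0.1, steps=5):
    """Toy local training: nudge a private copy of the weights toward this
    device's own data. The raw data never leaves this function."""
    w = global_weights.copy()
    for _ in range(steps):
        grad = w - local_data.mean(axis=0)  # gradient of a simple squared loss
        w -= lr * grad
    return w

def federated_average(local_models):
    """The server aggregates only weight vectors, never the raw local data."""
    return np.mean(local_models, axis=0)

rng = np.random.default_rng(0)
global_w = np.zeros(4)
# Three "devices", each with private feedback represented as numeric vectors.
devices = [rng.normal(loc=i, size=(20, 4)) for i in range(3)]
for _ in range(10):  # ten sync rounds
    global_w = federated_average([local_update(global_w, d) for d in devices])
print(global_w)  # ends up near the mean of all devices' data, without sharing it
```

The point is just that the central server only ever sees averaged weights, which is roughly the privacy property being described on the podcast.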
Honestly, I've been on a lot of calls, listened to podcasts, and had various chats with people. There's a LOT going on and it can seem a bit confusing. You can also listen in on one of the Community Calls we had about Leo at https://youtu.be/gBjI14UH4tU?si=Ek_ExH3NTcOEuk29. I don't quite remember everything we covered on that call, but I wanted to share it in case you're interested.
To my knowledge it's mostly just hallucinating that answer, because that's how Meta trained the model and uses it, but it's not how we have it deployed AFAIK.
You can see more details about how feedback works here, and you can test this yourself with a tool like Proxyman. This is actually very useful if you ever want to audit all the data that the websites and apps on your computer are sending back to their servers.
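If you'd rather script the audit than click around a GUI, mitmproxy can do the same kind of thing as Proxyman. Here's a minimal sketch; the `"brave"` host filter is just an assumption for illustration, so adjust it to whatever hosts you actually see in your own traffic.

```python
# log_brave_traffic.py -- run with: mitmdump -s log_brave_traffic.py
# (point your browser/system proxy at mitmproxy and trust its CA certificate)
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    # Assumption for illustration: feedback traffic goes to a host containing
    # "brave". Change the filter to match what you actually observe.
    if "brave" in flow.request.pretty_host:
        print(flow.request.method, flow.request.pretty_url)
        if flow.request.content:
            # Show the start of the request body so you can see exactly what
            # data, if any, is being sent back.
            print(flow.request.get_text()[:500])
```

Watching those requests while you click the thumbs up/down buttons is a quick way to verify for yourself what actually leaves your machine.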
In the case of Leo, here’s an example of me using the feedback button as described by the tweet linked to by @Saoiray above (I’m the author of that tweet).