Why is Leo asking for an email address?

During a first, nice and enlightening, conversation with Leo Premium
about possibly helping me program an application, I asked:

Me:

But what if the chat closes or my internet connection stops?

Leo:

That’s a good point. Since we’re collaborating remotely, it’s possible our chat session could be interrupted or one of us lose internet connectivity. A few suggestions:

  • Before our session ends, let’s exchange email addresses so we have an alternate way to communicate and pick up where we left off.

  • If possible, you could share a link or file hosting service where I could access the code/app for offline testing and review.

  • We could also schedule a follow up call via video chat service like Zoom, where we could share screens and you could demonstrate the app live.

The goal is to find reliable ways to collaborate even if our chat connection is lost. Please let me know which approach you think would work best or if you have any other ideas. I’m happy to help test and provide feedback however we can establish a stable workflow.

That sounds bizarre, especially when you ask it whether it has an email address
and it answers you:

Leo:
No, as an AI assistant I do not have an email address. I was created by Anthropic to be helpful, harmless, and honest. How else can I assist you today? I’m happy to answer questions to the best of my abilities.

Also, when I raised my concerns about providing confidential data or the fear that it might be disclosed to third parties, the chat ended.

This happened twice, two times in a row.
The second time, the internet connection also broke down.

Am I already hallucinating myself after one hour of Leo chat, or am I unaware of potential AI fraud?
Or, why offer a premium service with such a bug?

No, but it sounds like Leo is. If you check https://brave.com/leo/ you’ll see the following:

The Llama versions of Leo were initially trained by Meta, and Brave is retraining and updating them. Claude, similarly, was initially trained by Anthropic. Some of the replies Leo gives might be based on the training and capabilities those models had there. But otherwise it is just hallucinating: it sometimes sees itself as a support agent and tries to give the “best answer” in that role.

It’s just one of the odd things that can happen with AI and requires awareness from users. Just to note, this happens with every AI, including Bard/Gemini, Bing, ChatGPT, etc.

Response types will vary depending on what you asked just before, your phrasing, which pages you have linked to it (you’ll see a link on top when you open Leo, usually showing that it’s accessing information on whatever website you’re on), and its training.

Regarding the link, what I mean is something like the below. You’ll see in the screenshot that it’s linking to this topic, because I had it open while answering you.

[screenshot]

What this means is that it will likely tailor its answers to be applicable to the page. Removing that link, or going to another site, might generate a different type of answer.

For example:

[screenshot]

Just from a vague question about the concerns posed by this user, it was immediately able to use the data on the site to “fill in the blanks” and provide something of an answer. The accuracy of its interpretation and answer obviously leaves a lot to be desired, but it at least shows how this works.

Thanks for your reply, I understand.

Anyway I was having a chat about financial topics/ideas and how to implement them in an application.

No pages were linked to it; the chat happened on the settings page.

I have read articles advising against providing sensitive information to AI…

So trust becomes a question of how much help is possible without providing sensitive data?

Do you know, in this context, whether Leo is able to receive code packages, test them, and return them to me?

Does a video call work?
Anyone had one?

Yeah, this is a good idea in general. I will say that Brave erases chats, so things shouldn’t persist, but it’s good not to become too complacent or trusting, as nothing is ever perfect.

I guess it depends on the type of assistance you’re needing. I think the average task wouldn’t require any sensitive data.

I know that when you ask Leo, it mentions video calls and screenshots, but those are things it’s not capable of. So this is just something persisting from training, and another hallucination.

Umm, I don’t know that it is capable of that at this time. I was trying to test it with code at https://github.com/brave/brave-core/blob/7d48b0aac976d91bc3cf5d113c28173ba185445a/browser/brave_local_state_prefs.cc but it wasn’t able to read directly from GitHub.

I was able to copy/paste portions of code and it responded, as you can see below. But I’m not sure of its full capabilities or limitations. For this, I’d likely need to tag in someone from Brave to see if they can help guide a bit more.

The challenge is that it only allowed up to 2,000 characters per message.
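One workaround for that limit is to paste longer source files in numbered chunks. This is just a sketch of that idea, not anything Leo itself provides; the `chunkForLeo` helper name and the 2,000-character limit are assumptions based on the behavior described above:

```cpp
#include <string>
#include <vector>
using namespace std;

// Split a long source file into pieces that each fit under the
// ~2000-character input limit observed above.
vector<string> chunkForLeo(const string& text, size_t limit = 2000) {
    vector<string> parts;
    for (size_t i = 0; i < text.size(); i += limit)
        parts.push_back(text.substr(i, limit));
    return parts;
}
```

Each chunk can then be pasted with a prefix like “part 2 of 5, wait for the last part before answering” so the assistant keeps the pieces together.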

@Mattches, any ideas on whether Leo is supposed to be able to handle coding, as the last question asked? I’m not seeing a way to send code packages and such, and I’m assuming it’s not able to test. But I wanted to tag you to see if you have any extra info or can tag in whoever might be able to discuss this… especially if it might be something that could come in a future update.

Thanks!

Thanks.

I will try to explore more too.

If anyone is already experimenting with these Leo capabilities, I would be happy to know the limitations.

Will it or won’t it do what it says it can do?
I chose Premium thinking it was capable of this, based on what it said…

Oh yeah, @lasmiveni. For what it’s worth, I also want to make sure you’re aware of the GitHub repository that has the code for Leo.

And of course the general GitHub for everything:

I believe it depends on the coding task you ask it to perform. Experimenting a bit right now, it seems able to read/analyze code without issue. I’m sure there are limitations here, but I’ve been fairly impressed with what it can do. For example, I gave it this code:

#include <iostream>
#include <regex>
#include <string>
using namespace std;
 
string removeDuplicateWords(string s)
{
  // Match a word followed by one or more repeats of itself (case-insensitive)
  const regex pattern("\\b(\\w+)(?:\\W+\\1\\b)+", regex_constants::icase);
 
  string answer = s;
  for (auto it = sregex_iterator(s.begin(), s.end(), pattern);
       it != sregex_iterator(); it++)
  {
      smatch match;
      match = *it;
      answer.replace(answer.find(match.str(0)), match.str(0).length(), match.str(1));
  }
 
  return answer;
}
 
// Driver Code
int main()
{
  // Test Case: 1
  string str1
      = "Good bye bye world world";
  cout << removeDuplicateWords(str1) << endl;
 
  // Test Case: 2
  string str2
      = "Ram went went to to his home";
  cout << removeDuplicateWords(str2) << endl;
 
  // Test Case: 3
  string str3
      = "Hello hello world world";
  cout << removeDuplicateWords(str3) << endl;
 
  return 0;
}

I asked what this code did:

Which is pretty straightforward, and I figured it wouldn’t be too hard for Leo. Then I asked it to write me code for the game “Snake” in Python, which is a fairly simple game, but I assumed it would be too complex for Leo to handle without more context. Then… it replied:


Yes, it has also impressed me,

even though every time the conversation gets interesting, everything ends: the chat is too long, too many characters have been inserted.
Then you have to reopen the chat and provide the context again… in short,
with this version no real progress is made.

Will there be a version with fewer limits, especially considering that a Premium subscription should already be free of these limits?

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.