22.7 on the Richter Scale?

I just asked Brave’s Leo AI whether Sudbury, ON is on an active fault. Its initial, unprompted reply gave me a whole bunch of information I didn’t ask for or need.

Normally I’d just say “oh, whatever” and move on, but this time a line caught my eye: “seismic activity has ranged from earthquakes ranging from 1.somethingIdon’tremember to 22.7 on the Richter Scale”.

22.7??

An earthquake of 22.7 would be so incredibly powerful it would probably involve the entire planet breaking up and the bits flying off into space.
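
To put a number on it: the scale is logarithmic, and the energy released grows as roughly 10^(1.5 × magnitude), so each full step is about 31.6 times more energy. Taking the 1960 Valdivia, Chile quake (magnitude 9.5, the largest ever recorded) as a baseline:

energy ratio = 10^(1.5 × (22.7 - 9.5)) = 10^19.8 ≈ 6 × 10^19

That is roughly sixty quintillion times the energy of the biggest earthquake on record.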

Since the Death Star is fictional, and since it is highly unlikely that Earth will do that kind of thing spontaneously, I argued the point with the AI, and it finally apologized and fixed the mistake “moving forward” (which doesn’t mean anything long-term).

It concerns me that the AI is capable of making such errors, because (a) people who don’t know any better won’t catch them, and (b) such a stupid mistake is easy to catch anyway: check the extrapolated values against the known scale before spewing them out to the user (a runnable sketch follows the checklist below):

Min value: 1.3
Max value: 22.7
Scale: Richter

1.3 >= smallest possible or recorded value? YES
22.7 <= largest possible or recorded value? NO
— Find the largest recorded value, or re-extrapolate from better data.
— Update the extrapolation algorithm for future Richter-scale inquiries.
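
Here’s a rough sketch of that check in Python. The bounds are my own assumptions rather than official limits: about -3 for the smallest microquakes instruments can detect, and 9.5 (Valdivia, Chile, 1960) for the largest magnitude ever recorded.

# Sanity-check a claimed magnitude range before showing it to the user.
RICHTER_MIN = -3.0  # assumed floor: smallest detectable microquakes
RICHTER_MAX = 9.5   # largest magnitude ever recorded (Valdivia, 1960)

def check_magnitude_range(low, high):
    """Return a list of reasons the claimed range looks implausible."""
    problems = []
    if low > high:
        problems.append(f"min {low} is greater than max {high}")
    if low < RICHTER_MIN:
        problems.append(f"min {low} is below the plausible floor of {RICHTER_MIN}")
    if high > RICHTER_MAX:
        problems.append(f"max {high} exceeds the largest recorded value of {RICHTER_MAX}")
    return problems

# The range Leo gave me for Sudbury:
for problem in check_magnitude_range(1.3, 22.7):
    print("FLAG:", problem)
# prints: FLAG: max 22.7 exceeds the largest recorded value of 9.5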

LLMs don’t work like the usual algorithms written by humans, so the easy fix you mention isn’t applicable (there is no explicit extrapolation algorithm inside the model to patch; it just generates plausible-sounding text). Instead, you should report the message containing the mistake with the thumbs-down button. The idea is that Brave will look into these reports, and I’m assuming they pass the important ones on to Anthropic or whoever trains the LLM that was powering Leo when you got the bad reply. The mistake will then become part of the training data for the next version of the LLM (not of Brave Browser), so it will hopefully not make that mistake again.