I just asked Brave if Sudbury ON was on an active fault. The initial, unprompted reply gave me a whole bunch of information I didn’t ask for or need.
In general, I usually say, “oh whatever” and move on, but this time a line caught my eye: “seismic activity has ranged from earthquakes of 1.somethingIdon’tremember to 22.7 on the Richter Scale”.
22.7??
An earthquake of 22.7 would be so incredibly powerful it would probably involve the entire planet breaking up and the bits flying off into space. (For reference, the largest earthquake ever recorded was magnitude 9.5, in Chile in 1960, and the scale is logarithmic, so 22.7 isn’t just “a bit bigger”.)
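To put the number in perspective, here’s a quick back-of-the-envelope calculation. It assumes the standard Gutenberg-Richter energy relation, log10(E) ≈ 1.5·M + 4.8 with E in joules, which is an approximation, and uses magnitude 9.5 (the 1960 Valdivia, Chile quake) as the largest ever recorded:

```python
import math

# Gutenberg-Richter energy relation (approximate): log10(E) = 1.5*M + 4.8,
# where E is the seismic energy released in joules.
def quake_energy_joules(magnitude: float) -> float:
    """Approximate energy released by an earthquake of the given magnitude."""
    return 10 ** (1.5 * magnitude + 4.8)

largest_recorded = 9.5   # 1960 Valdivia, Chile earthquake
claimed = 22.7           # the figure from the AI's reply

ratio = quake_energy_joules(claimed) / quake_energy_joules(largest_recorded)
print(f"A magnitude-{claimed} quake would release about {ratio:.1e} times "
      f"the energy of the largest quake ever recorded.")
```

The ratio works out to roughly 10^19.8, which is why “the planet breaking up” is not much of an exaggeration.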
Since the Death Star is fictional, and since it is highly unlikely that Earth will do that kind of thing spontaneously, I argued the point with the AI. It finally apologized and promised to fix the mistake “moving forward” (which doesn’t mean anything long term).
It concerns me that the AI is capable of making such errors, because (a) people who don’t know any better won’t catch it, and (b) such a stupid mistake is easy to prevent anyway: check the extrapolation against the known scale before spewing it out to the user:
Min value: 1.3
Max value: 22.7
Scale: Richter
1.3 >= smallest possible or recorded value? YES
22.7 <= largest possible or recorded value? NO
— Find largest recorded value or re-extrapolate based on better data.
— Update extrapolation algorithm for future Richter scale inquiries.
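The checklist above can be sketched in a few lines of code. This is a minimal illustration, not a real pipeline: the 0.0 lower bound is a simplification (microquakes can register below zero), and 9.5 stands in for “largest recorded value”:

```python
def check_magnitude_range(min_val: float, max_val: float,
                          scale_min: float = 0.0,
                          scale_max: float = 9.5) -> list[str]:
    """Sanity-check extrapolated magnitudes against the known scale.

    scale_max defaults to 9.5, the largest earthquake ever recorded
    (Valdivia, Chile, 1960). scale_min of 0.0 is a simplification.
    Returns a list of problems; empty means the range is plausible.
    """
    problems = []
    if min_val < scale_min:
        problems.append(f"min {min_val} is below smallest plausible value {scale_min}")
    if max_val > scale_max:
        problems.append(f"max {max_val} exceeds largest recorded value {scale_max}")
    return problems

# The range from the Sudbury reply: 1.3 passes, 22.7 gets flagged.
issues = check_magnitude_range(1.3, 22.7)
for issue in issues:
    print("RE-EXTRAPOLATE:", issue)
```

Run before emitting the answer, a check like this would have flagged 22.7 and forced a re-extrapolation instead of shipping nonsense to the user.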