Leo is bad at video summaries

I decided to give Bing a chance and then compared it to Leo. There was a substantial difference in their summaries of the same video. There have at least been some improvements to Leo, but it still struggles. In some cases, it is unable to summarize videos at all and even lies about what is in them.

As it stands, Bing is the clear winner and Leo is absolutely horrible… at least when it comes to recognizing what is going on in YouTube videos and providing summaries.

Example 1: Community Call summary

[Screenshot: Brave/Leo summary]

[Screenshot: Edge/Bing summary]

Example 2: Episode summary for Border Security: Australia

[Screenshot: Edge/Bing summary]

[Screenshot: Brave/Leo summary]
Good write-up here.
It is worth mentioning that the Premium version, using the Claude model, was at least able to provide a more detailed summary when prompted (image one was the initial response; image two was the response after being prompted for more info).

That said, it is certainly lacking when compared to ChatGPT. I will submit this for feedback.


You may also want to note that since the new system for summarizing pages was implemented (the one that seems not to differentiate much between videos and generic pages), video summaries very often waste a whole bullet point saying something like "This page contains a transcript of a video delimited by timestamps." My guess is that because you are asking the AI simply to "summarize the page", instead of asking it to "summarize the video given the transcript", it often thinks it is worth mentioning that it is summarizing a video transcript. It would be nice if you could fix this.
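
For illustration, here is a minimal sketch of the difference (Leo's actual prompts are not public, so both templates below are assumptions):

```python
# Hypothetical prompt templates; Leo's real prompts are not public.

# A generic "summarize the page" prompt gives the model no hint that the text
# is a transcript, so it may spend a bullet describing the transcript format.
GENERIC_PAGE_PROMPT = "Summarize the following page content:\n\n{page_text}"

# A video-aware variant tells the model up front that the text is a
# timestamped transcript and asks for a summary of the video's content only.
VIDEO_TRANSCRIPT_PROMPT = (
    "The text below is a timestamped transcript of a video. "
    "Summarize what happens in the video; do not describe the transcript "
    "format or mention that it is a transcript.\n\n{transcript}"
)
```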

Also, while you’re at it, I believe you’d save a LOT of time if you allowed users to customize all of the AI prompts. We could then start a forum thread for exchanging prompts, and you could pick the best ones from there, essentially crowdsourcing the optimization.