Hey everyone! Kevin here, Kagi's Head of Education. @npeng raised important points that I want to address.
First, thanks for the detailed feedback. Let me clarify our actual goal: Slop Detective isn't designed to train accurate AI detectors. It's designed to help kids develop a habit of scrutiny for all content they encounter online.
Research shows kids already struggle to identify advertising (https://pmc.ncbi.nlm.nih.gov/articles/PMC9724173/), and adults struggle with misinformation broadly. The skill we're teaching (pause, look closely, ask questions) transfers across all these contexts, not just AI detection.
Regarding "simple" examples, this is intentional pedagogical design. You don't teach chess using grandmaster games. When teaching younger kids, you start with obvious cases so they can practice the behavior without getting overwhelmed. As they develop the scrutiny habit, complexity can increase.
Imagine we used the best images from frontier models and asked kids to spot fakes from the beginning. If we showed kids SOTA AI output that's indistinguishable from human work, we'd teach them "it's impossible to tell, so why bother?" That plays directly into the "Liar's Dividend," where people stop trusting anything because detection feels impossible. That's when they become most vulnerable to accepting fake content as real, or, what I think is worse, dismissing real content as fake. IMO, this is the REAL potential harm.
Regarding potential harm: kids playing casually won't walk away thinking they're experts, nor do we at any point tell them they are. It's a simple, fun activity, and in the testing I've done with kids in the 8-10 year-old range, it accomplishes its goal. The #1 feedback we've received is that "some are easy and some are really hard to tell."
We'll refine other details over the coming days, but I did want to address your points about the game being misleading and potentially harmful to kids.
Happy to elaborate on anything that isn’t clear.
Links:
NPR's AI-Slop quiz -- https://www.npr.org/2025/11/30/nx-s1-5610951/fake-ai-videos-slop-quiz
Liar's Dividend -- https://www.californialawreview.org/print/deep-fakes-a-looming-challenge-for-privacy-democracy-and-national-security