I really like the idea of Slop Detective, as it is something both kids and adults need to learn. But in its current iteration I don't think it quite does the job. The main problem I see is that it relies too much on the ability to spot technical features rather than context.
For example, the AI music examples can easily be dismissed as just crappy, low-effort, mass-produced music, which existed well before AI music ever did. With careful listening you can tell things like the voices not sounding natural, but in six months or a year that probably won't be the case any more anyway.
Another idea being consistently pushed is that human-written text is generally true, whereas LLM responses are generally overly verbose/technical, or false. In reality, humans are often wrong, often hallucinate themselves (false memories, misunderstanding something they thought they understood, and so on), and even lie, which is not that different from AI.
It also misses what is, at least in my opinion, the most important factor: AI slop generally has two purposes. The first is to draw user attention (get clicks, make sales, promote crappy products and the like), and the second is political/social manipulation.
I think a tool like Slop Detective could be very useful at helping people of all ages with this, but it really needs to be a more generalized information-evaluation/critical-thinking tool rather than just a basic "does this look like AI or not" kind of thing.
AFAIK nothing like that currently exists, and most education systems teach kids to mindlessly believe anything they are told and to accept truth from authority rather than evaluate it, so it is definitely needed.
As for specifics of how it could be improved: for example, you could show the front page of a fake website promoting a new product like a phone, or a subscription service. The easy level would use AI-generated images and weird, meaningless catchphrases like "Level up your gaming levels!", and when the user evaluates it, the tool could explain and highlight the specific points that show it is fake.
But then on the most difficult level, you could have the same page with real photos rather than AI images and more convincing text: a human-created overall layout and theme, with only the product descriptions written by AI, simply promoting features that could seem plausible but aren't real. Like "Full iPhone AND Android app compatibility!"
The user would then either need to know that such a thing is effectively impossible (and probably illegal) to make, or look it up themselves and work it out.
Then there could be, for example, a summary of a news article alongside a link to the real article, with the summary made intentionally misleading, like AI summaries (and human-written headlines/summaries) often are.
Or there could simply be a series of five Reddit posts about a controversial topic, each making different claims about what happened, only one of which is factually accurate, and the user has to determine which one is correct. Obviously this would only work with topics where there is a clear, factual answer: "John Doe said in an interview that he wants to murder all goats." "Actually, John Doe said Jane Doe wants to murder all goats, and John wants to save all goats." "Actually it was Paul Doe who wanted to murder goats; John was just quoting Paul." Etc. Something where you can find an original source and prove 100% what actually happened.
The answers at the end could then explain, with examples, where a person could look to validate such information with reasonable accuracy, and why that method of validation is viable. Saying "well no, blah political theory is obviously wrong because Wikipedia says so", or because some government-controlled website says so, doesn't make it true.
I understand this goes a bit beyond AI slop specifically, but I think that is necessary for the tool to be genuinely useful. As I mentioned before, the real problem with AI slop is how it is meant to manipulate people, and humans were doing that perfectly well before AI came along to automate the process. So anything that helps people detect AI slop effectively should be just as effective at teaching a person to detect human slop.