I agree that adding the ability to report AI slop within Kagi Assistant would be valuable.
Yet, from a larger perspective, I wonder: what is the goal? Is it reasonable to believe that Kagi, with its limited number of users and searches per day, can realistically create (and maintain) a curated blacklist of AI slop that meaningfully improves the quality of Kagi Search results?
If the goal is not a blacklist but a predictive model that estimates the probability that a webpage is AI slop, then user reports of AI slop would help generate a training dataset for developing that model. However, Kagi has yet to disclose whether this predictive approach is showing early signs of success, and how it is (hopefully) overcoming the significant challenges involved.