I just did a very technical, very specific search with the keyword "tutorial" in it. While the results seemed completely relevant and helpful, it would go a long way toward easing some anxiety and paranoia if I knew that Kagi made a best effort to detect AI-generated content and either filtered it out, tagged it, or gave me a toggle controlling whether it appears in the search results.
AI (LLM-based stochastic text generation, specifically) makes content farming much cheaper. Being stochastic, it's bound to produce inaccuracies. That in itself is not the problem: anything on the web will have inaccuracies. The issue is farmed content being passed off as legitimate information, with no accountability for the algorithms behind it and no (possible?) transparency about their biases. Users may (and often do) have limited resources for further research and may trust the information as authoritative. This is a different scenario from the question-and-answer tools Kagi is investing in, for instance, which can at least give me keywords for further research.
There may also be ethical and environmental concerns: not every LLM/AI project focuses on reducing its carbon footprint or minimizing its models, unlike (IIRC) some of the Hugging Face research projects. The general mindset in the field seems to be that the exponentially growing models will eventually cross a threshold where the new capabilities are worth the cost. Others think research will eventually reach the top of the S-curve, and we may not realize it until very late.
As far as I know, no search engine or browser provides this as a feature. I'm aware of research projects to detect AI-generated text and images; OpenAI itself provides a detector along those lines.
A small visual aid suggesting that a search result looks AI-generated would help me decide if the article is even worth looking at.
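For what it's worth, wiring up such a tag doesn't seem far-fetched. Here's a minimal sketch using the RoBERTa-based GPT-2 output detector that OpenAI published on the Hugging Face Hub; the model id and the "Fake"/"Real" labels come from that model card, and the score threshold is an arbitrary assumption of mine, not anything Kagi has said it would use:

```python
# Minimal sketch: tagging a search-result snippet as likely AI-generated.
# Uses OpenAI's RoBERTa-based GPT-2 output detector from the Hugging Face
# Hub. The "Fake"/"Real" labels are from that model card; the 0.9
# threshold is an arbitrary assumption for illustration.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

def ai_tag(snippet: str, threshold: float = 0.9) -> str:
    """Return a small visual tag if the snippet looks machine-generated."""
    # Truncate to the model's maximum input length.
    result = detector(snippet, truncation=True)[0]
    if result["label"] == "Fake" and result["score"] >= threshold:
        return "⚠ possibly AI-generated"
    return ""

# Example: tag a snippet before rendering it in a results page.
print(ai_tag("The quick brown fox jumps over the lazy dog. " * 10))
```

Even a rough heuristic like this, surfaced as a badge next to each result, would beat having nothing, and a stronger detector could always be swapped in later.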