I subscribed to Kagi because I loved the idea of a search engine with custom results, paid for with money instead of my data.
Unfortunately, I am very uncomfortable seeing how much energy is being put into doing everything with LLMs. The LLM "quick answers" I get through Kagi are wrong more often than they are right. Even in Kagi's own example, the LLM's performance is very poor: 7 out of the 10 "key points" are just straight up wrong.
The changelog from Kagi claims that "No doubt [LLMs] will get better in the future", but I see no evidence that this is true. ChatGPT is doing exactly what it was designed to do: spit out the next text that seems to fit. The model inherently has no idea what is true and what isn't.
I use Kagi because I want a search engine that shows me the pages I'm looking for. I'm not using Kagi as a ChatGPT interface. If I wanted to use ChatGPT, I would use ChatGPT.
This isn't an "I hate all AI" post - I genuinely believe that AI can help us in many areas. I just think that using an AI designed to spew out plausible-looking nonsense to "answer" my search queries isn't a great idea.
I know that I can turn off many of the AI features - I just find it a shame that I'm still paying for a technology that I fundamentally disagree with. And my only other choice is to go back to an ad-funded search engine.