Vlad I'm sorry, I know this is an old message, but I feel like I need to respond to this specifically because the whole argument feels particularly egregious.
Even then, I believe this is the best way to consume news. Have you tried reading a news website recently? Between ads, tracking, low-quality journalism and opinionated reporting, it's impossible to consume. The way we approach it in Kagi News not only removes all the noise and sticks to the facts, but also offers perspectives on events from diagonally different sources. That's all I care about from journalism, and it would take hours to manually synthesize for just one event.
An LLM summarizing news sources does not remove any noise; if anything, it adds to it. Summarizing anything correctly requires human judgment: understanding the context, deciding what is important and core to the information being presented, and deciding which details can be left out. To honestly believe an LLM can do that consistently by continuously guessing the next most likely word is observably, demonstrably untrue, especially given the nuance of the human politics that generate the news in the first place. Daily news, even.
How does an LLM remove the "noise" of low-quality journalism and opinionated reporting when both of those things are necessarily present in the sources it pulls from to summarize? Is Kagi's model somehow so omniscient that it can stick to facts it never observed nor experienced, by summarizing what other humans are saying through their already subjective and opinionated lens? That sounds extremely unlikely. Offering links to all perspectives on any given event is what a news aggregator does, and one can read those different perspectives for oneself. This technology of "looking for different news sources" has been around for a while, and I fail to see how an LLM adding the extra step of summarization is any real benefit, especially when the summaries are often faulty.
Vlad Point to an actual example of bullshit created by Kagi News.
Just open the feedback page for Kagi News right now. There are several complaints about untrustworthy sources, inaccurate summaries, and context pictures being shown below headlines in ways that invite misinterpretation. Users having to consistently point out these very complex and subjective problems for you to patch does not corroborate your claim that following the news can be reliably automated. It does not have intrusive ads, I'll give you that.
Turns out that journalism, the process of relaying information, and the inherent human bias present when reporting facts as observed through our collective subjective experiences, in an interconnected world with all sorts of competing incentives, are all actually extremely complex and nuanced things. They can't be reliably summarized by an algorithm that has no capacity to truly understand the context and meaning of what it is regurgitating, nor can that algorithm present anything "objectively" when the pool from which it pulls its "understanding" is already biased to begin with.
You can't expel bias, subjectivity, or any other perceived human fault from the news (or anything as grand in scope, really) with an LLM, Vlad. You are welcome to try, but I'd be extremely glad if you did so away from the product I'm paying for, preferably under a different mission statement than "humanizing the web", because there seems to be a fundamental misunderstanding of what the word "humanize" means.
This bullheaded insistence on implementing LLM features (note that I'm specifically referring to LLMs, before the "AI is other things that have been used forever" argument comes up; I know, everyone knows) into what is supposed to be a search engine makes me increasingly distrustful of the choices the company is making, not to mention of the claims about how much it's costing you or where exactly my money is being sent. You and your team seem to be really into LLMs, and it is pretty fascinating tech, but I'm not paying you to pursue your personal interests. I'm paying you to provide me with good internet search, and the last thing I need is a large language model trying to do my searching for me, or resources I contribute to being directed toward something like Kagi News, which I unironically believe to be extremely dangerous, no matter your intentions.