When integrating Kagi into a local LLM, the LLM agent will first execute a search query through Kagi, and then make a separate request to each of the returned websites to fetch the data, which is then provided to the local LLM for processing, either as part of the context or via some RAG solution. Unfortunately, websites often make it very hard for simple bots to view the contents of a page, restricting them with CAPTCHAs, JavaScript requirements, user-agent blocking, regional blocking, etc.
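To make the workflow concrete, here is a minimal sketch in Python. The endpoint and `Authorization: Bot` header follow my understanding of the published Kagi Search API, and `local_llm` is a hypothetical placeholder for whatever local model is in use:

```python
import requests

KAGI_API_KEY = "YOUR_API_KEY"

def kagi_search(query):
    """Step 1: run the query through the Kagi Search API."""
    resp = requests.get(
        "https://kagi.com/api/v0/search",
        headers={"Authorization": f"Bot {KAGI_API_KEY}"},
        params={"q": query},
        timeout=30,
    )
    resp.raise_for_status()
    # Keep only plain search results (t == 0); the API also returns
    # other entry types such as related searches.
    return [r for r in resp.json()["data"] if r.get("t") == 0]

def fetch_page_text(url):
    """Step 2: fetch each result page ourselves -- the fragile part.
    CAPTCHAs, JavaScript-only rendering, user-agent blocks, and
    regional blocking all bite here."""
    try:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        return resp.text
    except requests.RequestException:
        return None  # the data silently goes missing

def local_llm(query, context):
    """Hypothetical stand-in for the local model (llama.cpp, Ollama, ...)."""
    raise NotImplementedError

def answer(query):
    results = kagi_search(query)
    pages = (fetch_page_text(r["url"]) for r in results)
    context = "\n\n".join(p for p in pages if p)
    # Step 3: hand the fetched text to the local LLM, either directly
    # in the context window or through a RAG pipeline.
    return local_llm(query, context)
```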
Kagi has presumably already solved this problem in its crawler and already caches the textual content of the pages it indexes. Rather than making the user query each site separately and suffer from missing data, or re-implement solutions to problems Kagi has already solved, Kagi could simply return the full text of each page as part of its API responses.
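Concretely, this could be as small as one extra field per result. The `content` field below is invented purely for illustration, but with something like it, the entire page-fetching step above disappears:

```python
def answer_from_cache(query):
    results = kagi_search(query)  # same API call as above
    # Hypothetical "content" field carrying Kagi's cached page text:
    # no per-site fetching, and no missing data from blocked requests.
    context = "\n\n".join(r["content"] for r in results if r.get("content"))
    return local_llm(query, context)
```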
As more people interact with the web through an LLM rather than through a web browser, it will become increasingly useful for search engines to facilitate this and support the user's ability to quickly and easily get data from the internet.
Most search providers are currently integrating AI directly into their search engines, but this approach is not particularly consumer-friendly. It locks the user into the AI engine the search provider has selected (removing user choice), and it weakens privacy: the search provider now sees not just the search but parts of the user's conversation with the AI, and a lot more context around what the search was about. It also prevents the user from using fine-tuned LLMs that act as specialized agents on each user's behalf; instead, the agents are owned and controlled by the search provider and carry the provider's biases and preferences rather than the user's.
Since Kagi is trying to be a user-first search provider, this is perhaps a way to differentiate itself: making it easier for users to integrate Kagi into their local LLM search workflows than any other search provider.