Hello,
I'm writing this up based on a conversation I had on Discord about this feature (thread: https://discord.com/channels/1256077108111868035/1407741204870729859).
My core idea is straightforward. Kagi Assistant has unique capabilities like personalized search and Lens. If Kagi exposed its Assistant models via an OpenAI-compatible API, it would open up a world of possibilities.
From my perspective, and I'm sure many of you share this, the benefits are huge:
- Seamless Integration with Self-Hosted Tools: I use self-hosted platforms like OpenWebUI and AnythingLLM for local/private RAG (Retrieval-Augmented Generation) setups. These tools already support OpenAI-compatible endpoints, so a Kagi API would be a true "plug-and-play" addition (see the sketch after this list), letting us combine Kagi's excellent search grounding with our private documents.
- Privacy & Control: Keeping chats and AI interactions in a single, controlled environment is paramount. An API would let us store our AI conversations privately and centrally, easing concerns about data privacy and data retention.
- Consolidated AI Subscriptions & Credit Usage: Like many of you, I'm trying to consolidate my AI tooling. If Kagi provided an API, I could spend my existing Kagi credits directly instead of paying for multiple platforms, greatly reducing the number of separate AI subscriptions I need.
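To make the "plug-and-play" point concrete, here's a rough sketch of what I have in mind. Everything in it is illustrative: the base URL, API key, and model name are placeholders I made up, not real Kagi endpoints. The point is that any tool or script that already speaks the OpenAI API could talk to Kagi with nothing more than a base-URL swap.

```python
# Illustrative only: the base URL, token, and model name are placeholders,
# not real Kagi endpoints.
from openai import OpenAI

client = OpenAI(
    base_url="https://kagi.com/api/v1",   # hypothetical Kagi endpoint
    api_key="YOUR_KAGI_API_TOKEN",        # hypothetical Kagi API token
)

# This is the same kind of call OpenWebUI or AnythingLLM makes under the hood
# when pointed at any OpenAI-compatible backend.
response = client.chat.completions.create(
    model="kagi-assistant",               # hypothetical model ID
    messages=[{"role": "user", "content": "Summarize today's top tech news."}],
)

print(response.choices[0].message.content)
```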
I see this as a potentially low-lift, high-gain feature for Kagi. While I understand there might be concerns about token usage or abuse, Kagi already has robust pricing models and usage limits in place that could easily be extended to the API: usage through it could be deducted from the extra AI Credits bundles, just as searches already draw down API credits.
This feature would also let users expose their custom assistants as models, and let external models natively search, summarize, and ground facts through Kagi's other APIs.
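As a sketch of how the custom-assistant side might look from the client (again with made-up IDs, reusing the hypothetical client from the example above), the standard /models listing that OpenAI-compatible clients already rely on could simply surface them:

```python
# Hypothetical continuation of the sketch above: custom assistants surfaced as
# model IDs through the standard OpenAI-compatible /models listing.
for model in client.models.list():
    print(model.id)  # e.g. "kagi/research-assistant" (made-up ID)

# A custom assistant could then be selected like any other model:
reply = client.chat.completions.create(
    model="kagi/research-assistant",  # made-up custom-assistant ID
    messages=[{"role": "user", "content": "Ground this claim with a web search."}],
)
print(reply.choices[0].message.content)
```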