
Provide an API for the assistant similar to other Kagi APIs already available like FastGPT. An Assistant API would allow users to use various apps and tools that require connecting to an LLM.

All of these tools are compatible with OpenAI's API, so an OpenAI-compatible API would be the most useful. Having multiple LLMs available complicates compatibility, though, since the caller would have to select an LLM. Maybe the chosen LLM could be provided in the API endpoint URI?
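To illustrate the URI idea, here is a minimal sketch of encoding the chosen model as a path segment. The base URL and model name are assumptions for illustration, not real Kagi endpoints:

```python
# Hypothetical: one OpenAI-style endpoint per model, with the model
# name as a path segment. BASE_URL is made up, not a real Kagi API.
BASE_URL = "https://kagi.com/api/v1/assistant"  # assumption

def chat_endpoint(model: str) -> str:
    """Build a per-model chat-completions URL (illustrative only)."""
    return f"{BASE_URL}/{model}/chat/completions"

print(chat_endpoint("claude-3-sonnet"))
```

A client would then pick its LLM simply by choosing which URL to call, with no extra field in the request body.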

Alternatively, just providing a pass-through to OpenAI's API would be great: pass the request along to the OpenAI API directly and tell users to consult the OpenAI API docs. In theory this would require no maintenance of the Assistant API even when the OpenAI API is updated.
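A pass-through would mean clients send the exact JSON body the OpenAI Chat Completions API expects, just to a different host. A rough sketch, where the Kagi base URL is a hypothetical placeholder:

```python
import json
import urllib.request

# Sketch of a pass-through request: the standard OpenAI chat-completions
# payload, aimed at a hypothetical Kagi-hosted URL. The endpoint below
# is an assumption, not a real Kagi API.
def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        "https://kagi.com/api/v1/assistant/chat/completions",  # hypothetical
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("KAGI_API_KEY", "gpt-4", "hello")
print(req.get_full_url())
```

Since the payload shape is unchanged, any existing OpenAI-compatible tool would only need its base URL swapped.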

Existing users of the web interface wouldn't be affected.

Some example tools that require an LLM API: paper-qa (https://github.com/Future-House/paper-qa), gpt-pilot (https://github.com/Pythagora-io/gpt-pilot), various custom chat UIs. There's a huge interest in this right now obviously so there are new tools popping up daily.

All the other big LLM providers offer an API. Without an Assistant API, I'm required to buy an additional subscription to another LLM provider on top of Assistant, and I don't want to have to pay for both.

This is really a feature request targeted at all the developers out there who play around with this stuff, but I think your subscription base has a lot of those people 🙂

    5 days later

    Without an Assistant API, I'm required to buy an additional subscription to another LLM provider in addition to Assistant, and I don't want to have to pay both.

    No you don't, actually. An OpenAI API key doesn't require a ChatGPT subscription. There are people who don't have ChatGPT Plus but instead use an OpenAI API key with various interfaces; it's just pay per use.

    I don't want to be a buzzkill, but I don't think Kagi is likely to do this, because it could easily be exploited by people who would just make Ultimate accounts and burn through their token allowance.

      I feel like people can already abuse the Assistant anyway. In any case, Kagi could limit the total number of tokens available via the API.

        2 months later

        I would also love this for so many reasons.

        Is abuse the primary concern? Can someone share some insight into the constraints of the problem space?

          a month later

          I believe the idea is an API for the Assistant that accesses the internet, right? Because that is the biggest difference between the Assistant and any other LLM service on the market: always having access to up-to-date information.

            9 days later

            I'd mainly like this so that others can build better Assistant interfaces that are stable (UI-wise) and actually useful. When working with prompts that generate assets (like code), I'd like the snippets not to be visible in the same thread, but off to the side in an assets sidebar, so that I can reference them later in prompts without having to scroll up and down so much. I'd also like the option to nest threads.

              bkrein These all seem like good ideas. Personally, I'd rather Kagi work on and refine the Assistant UI with features like that than simply expose an API and make users figure it out for themselves.

                RoxyRoxyRoxy Me too! But I'm not sure whether they have other priorities. It's also good to open things up for more participation.

                  10 days later

                  I found this page because I was looking for this exact thing. Upvote!

                    15 days later

                    I'd like to use Kagi Assistant in my desktop and mobile email apps. An API is the way to go. How about pay per use, with a fixed credit included for Ultimate users?

                    14 days later

                    I agree that this should be an option, at least for Ultimate members; it would make the Assistant my "one stop shop" for all AI-related tasks.

                      22 days later

                      This would be absolutely amazing to have, and I feel like abuse could at least be partially mitigated with strict but fair rate limits on requests and/or a cut-off when the weekly/monthly token limit is reached. Either that, or make it like the Search API, where requests cost a bit and you can top up your API balance based on your needs.
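The cut-off idea in the post above amounts to a simple token budget: deduct each request's token usage from a fixed allowance and reject calls once it runs out. A toy sketch (the limit and class are made up for illustration, not anything Kagi has described):

```python
# Toy sketch of a per-account token budget. The monthly limit is an
# arbitrary example number, not a real Kagi quota.
class TokenBudget:
    def __init__(self, monthly_limit: int):
        self.remaining = monthly_limit

    def try_spend(self, tokens: int) -> bool:
        """Deduct tokens if the budget allows; reject the call otherwise."""
        if tokens > self.remaining:
            return False
        self.remaining -= tokens
        return True

budget = TokenBudget(monthly_limit=1_000_000)
print(budget.try_spend(400_000))  # within budget
```

A real implementation would track usage server-side per API key, but the accounting logic is this simple.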

                        Many code editors have LLM chat functionality either built in or as an extension; providing an API would let one chat without ever leaving the code editor.
