OpenAI has published o3-mini, a cost-effective reasoning model. Could it be supported?
https://openai.com/index/openai-o3-mini/
Alternatively, offer a reasoning model other than DeepSeek.
I support this idea.
While I’m not familiar with the exact pricing (via API) of the o3-mini model, I understand that Kagi is committed to offering an unlimited experience.
However, incorporating expensive reasoning models like this could strain that sustainability.
That said, the o3-mini appears to strike a balance between affordability and performance, similar to the DeepSeek R1.
Perhaps Kagi could explore introducing the o3-mini model on a limited basis—such as 10 uses per day with character or content restrictions—during an experimental or beta phase.
This would allow the team to monitor usage and evaluate its impact on resources.
If it proves to be both sustainable and valuable, it could then be considered for broader, potentially unlimited use.
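Purely as an illustration (hypothetical names, nothing to do with Kagi's actual backend), the kind of per-user daily cap I have in mind could look like this:

```python
from collections import defaultdict
from datetime import date

DAILY_LIMIT = 10  # trial limit suggested above; purely illustrative

# (user_id, day) -> number of o3-mini calls made that day
_usage: dict[tuple[str, date], int] = defaultdict(int)

def try_use_o3_mini(user_id: str) -> bool:
    """Record one use and return True, or return False if today's quota is spent."""
    key = (user_id, date.today())
    if _usage[key] >= DAILY_LIMIT:
        return False
    _usage[key] += 1
    return True
```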
Could you please add OpenAI's o3-mini model to the Kagi Assistant? It apparently competes with DeepSeek's R1 on performance and price, but with a larger context window.
I wanted to try the model out to see how well it handles tasks like debugging code. I was also interested in trying the larger context window, e.g. by having it look for bugs across a codebase.
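In the meantime, this is roughly how I'd try it directly against the OpenAI API (a minimal sketch with the official Python SDK; the file path and prompt are placeholders I made up):

```python
# Minimal sketch, not Kagi-specific: calling o3-mini through the official
# OpenAI Python SDK to look for bugs in one file of a codebase.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("src/scheduler.py") as f:  # placeholder path, any source file works
    source = f.read()

response = client.chat.completions.create(
    model="o3-mini",
    messages=[{
        "role": "user",
        "content": "Review this code and list any likely bugs:\n\n" + source,
    }],
)
print(response.choices[0].message.content)
```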
Hi, we've just released o3-mini in Assistant.
antonio Wonderful. Do you have information on which model configuration is implemented? Low, medium or high? It doesn't seem to be specified in the doc, and the updated benchmark page refers to different versions depending on the table.
GreenHummingBird Low, medium or high?
o3-mini (medium)
This was specified by Vlad on Discord. Currently the team does not see a point in supporting o3-mini (high), as it is significantly more expensive while performing only marginally better.
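For context, the low/medium/high variants are the same model called with different reasoning effort settings; a minimal sketch of what that looks like with the OpenAI Python SDK (assuming a recent SDK version; this is not Kagi's actual integration code):

```python
# Sketch of selecting the o3-mini "effort" variant via reasoning_effort
# (assumes a recent openai Python SDK; not Kagi's integration code).
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="medium",  # "low" | "medium" | "high"; high spends more reasoning tokens
    messages=[{"role": "user", "content": "Briefly explain tail-call optimization."}],
)
print(response.choices[0].message.content)
```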
This is done now, no? I can see o3-mini in my list of LLMs.
This doc needs to be updated each time a model is added; right now it is lacking: https://help.kagi.com/kagi/ai/llms-privacy.html
That way we would know which configuration is implemented, high or medium.
laiz Thanks. It would be good to add this detail to the doc and to the model list in Assistant (and to be consistent on the benchmark page, since as of yesterday at least it was referring to the high version, which is irrelevant to Kagi users when selecting a model).
marcel001122 Yes, they added it yesterday.