Vlad, thinking about it, this might not be clear enough. Knowledge of the limitations of LLMs (e.g. plausible but hallucinated outputs, bias, sandbagging, weak logic & math) is not widespread, and there is no information about this in the Kagi documentation.
It would be nice to have a link in the tooltip pointing to a Kagi documentation page that explains the general limitations of LLMs.
At the moment Kagi users seem to be mostly tech-savvy, but as Kagi grows its user base, more and more "normies" will flock in. Managing user expectations will then become crucial.