I understand wanting to characterize this as an improvement request rather than a bug on your side, so the priority gets knocked down a peg and the work can be weighed against other ongoing efforts more realistically.
However, and I’ve seen this happen before, please keep in mind the real discrepancy between your lens as implementers and our lens as users. I don’t think it’s accurate to characterize “research assistant refusing to respond to queries” as simply a known gap in functionality. If the goal is to be model-agnostic, that isn’t happening: the details of the model are very much leaking through at the moment, and it’s causing a poor product experience. So as a user, research assistant is clearly “not working” for certain queries and exhibiting behavior that changes over time. Considering a moon landing query used to work, that is what makes this present as a regression to users. We don’t care about the details of the model; we care about the functionality.
Ultimately I agree: the value add is that Kagi is doing the legwork here to make LLM-enhanced search work well. It may be appropriate for a free, dime-a-dozen chatbot to refuse to get into the nuance of moon landing conspiracies, but it’s not appropriate for a powerful paid research assistant, and that’s simply what’s being raised here.
As a paying user and investor, I’m happy to throw a +1 on a feature request if that’s the preferred route, once one exists. And I do understand this is a difficult problem (and I’m trying not to be overly critical), as I’ve had to deal with similar issues using LLMs in consumer products myself.