I've found that with the Kagi Assistant, answers usually rely more on the top web search results than on the model's internal knowledge.
When you ask ChatGPT directly, it relies more on its training data unless you use the search mode. This means it pulls from a broader pool of knowledge for each answer, but it is more likely to hallucinate specifics or give outdated information. And because the model samples its output, you'll also see more variation when you ask the same question again.
When you ask GPT-4o in the Assistant, it focuses on information from the top search results, so the quality of the response depends on the quality of those results. In your case, it looks like the government's critics were underrepresented in the top results, so you got a less critical answer. And because the same question phrased the same way returns the same search results, the responses will be more or less the same each time.
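
For anyone curious about the mechanics, here's a minimal sketch of the general search-grounded (RAG-style) pattern the Assistant appears to follow. The `web_search` and `call_model` functions are hypothetical stand-ins, not Kagi's or OpenAI's actual APIs; this illustrates the pattern, not the real implementation:

```python
# Minimal sketch of search-grounded answering (RAG). Both helper
# functions below are hypothetical stand-ins, not real Kagi/OpenAI APIs.

def web_search(query: str, k: int = 5) -> list[str]:
    """Stand-in for a real search API: return top-k result snippets."""
    return [f"(placeholder snippet {i + 1} for: {query})" for i in range(k)]

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM call."""
    return f"(model answer grounded in the prompt below)\n{prompt}"

def grounded_answer(question: str) -> str:
    snippets = web_search(question)
    # The answer is constrained to the retrieved snippets, so its quality
    # (and slant) tracks the top results, and identical queries produce
    # near-identical answers.
    context = "\n\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    prompt = (
        "Answer the question using only the sources below, citing them "
        "by number.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}"
    )
    return call_model(prompt)

print(grounded_answer("What do critics say about the policy?"))
```

The design consequence is the trade-off you noticed: grounding in search results reduces hallucination and keeps answers current, but it also inherits whatever slant or gaps exist in the top results.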