KagiFeedbackDuder
IMO, using more uncensored models brings us closer to how "ChatGPT and friends" behaved in their first and second generations. Current-generation LLMs all tend to be extremely risk-averse, regardless of your purpose or expertise.
I'd say I see a rejection a few times a week. One query that was completely blocked across all of Kagi's LLMs except DeepSeek was a request to obfuscate the classic XSS payload (I just needed something quick to test a WAF at work). That's a trivial request any software engineer could fulfill by hand in a few minutes, but it was blocked because it could possibly be bad?
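To be clear about how trivial the request was: here's a minimal sketch of the kind of by-hand obfuscation I mean, assuming the classic `alert(1)` test payload (the helper name and the `String.fromCharCode` trick are my illustration, not the exact thing I asked the model for).

```python
# Illustrative sketch: hide the literal "alert(1)" string from naive
# signature-based WAF rules by rewriting the JS as character codes.
# Payload and helper are hypothetical examples for WAF testing only.

def obfuscate_xss_test(js: str) -> str:
    """Wrap a JS snippet in eval(String.fromCharCode(...)) inside a script tag."""
    codes = ",".join(str(ord(c)) for c in js)
    return f"<script>eval(String.fromCharCode({codes}))</script>"

print(obfuscate_xss_test("alert(1)"))
# The literal string "alert(1)" no longer appears anywhere in the output.
```

A couple of minutes of work, exactly the sort of thing any engineer testing a WAF writes by hand.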
On the subject of running these models locally: we aren't close to being able to run models like DeepSeek on consumer hardware. Consumers can barely run unquantized 70B models on $4K of hardware, and DeepSeek R1 is 671B parameters.