Kagi Assistant can now hide the reasoning progress of LLMs by automatically collapsing the text between <think> and </think> tags. However, this breaks when the literal text "</think>" appears inside the reasoning itself. I'm not even sure that's Kagi's fault; DeepSeek R1 should arguably be escaping such tokens in its output.
A prompt as simple as "think about the </think> tag" is enough to break it. Not a significant issue, though.
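A minimal sketch of the failure mode, assuming the client collapses reasoning by scanning the raw output for the literal tag strings (a hypothetical parser for illustration, not Kagi's actual implementation):

```python
def split_reasoning(output: str) -> tuple[str, str]:
    """Split model output into (reasoning, answer) by literal tag matching."""
    start = output.find("<think>")
    end = output.find("</think>")
    if start == -1 or end == -1:
        return "", output
    reasoning = output[start + len("<think>"):end]
    answer = output[end + len("</think>"):]
    return reasoning, answer

# Normal case: parsing works as intended.
ok = "<think>Weigh the options.</think>The answer is 42."
print(split_reasoning(ok))
# -> ('Weigh the options.', 'The answer is 42.')

# Injected case: the model mentions "</think>" inside its reasoning,
# so the parser closes the collapsed section early and leaks the rest.
broken = "<think>The user asked about the </think> tag, so...</think>Done."
print(split_reasoning(broken))
# -> ('The user asked about the ', ' tag, so...</think>Done.')
```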
The tags that encapsulate the reasoning portion of LLM output should be encoded in a way that can't be injected by the prompter, for example as reserved special tokens rather than literal text.
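A hedged sketch of that idea: treat the delimiters as dedicated special token IDs and split the token stream instead of the decoded text. The token IDs below are made up for illustration; real values would come from the model's tokenizer, where ordinary user text can never tokenize to a reserved ID.

```python
THINK_START_ID = 128001  # hypothetical reserved token ID for <think>
THINK_END_ID = 128002    # hypothetical reserved token ID for </think>

def split_reasoning_ids(ids: list[int]) -> tuple[list[int], list[int]]:
    """Split a token-ID stream into (reasoning, answer) at special tokens.

    Because user-typed text like "</think>" tokenizes to ordinary IDs,
    it can't forge the boundary the way a literal string match can.
    """
    if THINK_START_ID in ids and THINK_END_ID in ids:
        start = ids.index(THINK_START_ID)
        end = ids.index(THINK_END_ID)
        return ids[start + 1:end], ids[end + 1:]
    return [], ids
```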