I'm not sure what models or prompting the Kagi Study LLM uses, so that would be helpful to know, unless it's an in-house secret-sauce sort of thing. Same with the Coder model.
Anyway, my CISSP study workflow consists of the following:
CISSP Tutor - Opus 4.7 (reasoning) - Teach me about XYZ concepts that I'm unsure of
CISSP Research - Research (Experimental) - Check the web in detail for updated material on a topic
CISSP Validator - Opus 4.7 (reasoning) - Review my answers and give me a detailed, harsh breakdown
CISSP Exam Simulator - GPT 5.4 Mini - Rapid-fire exam simulator questions. I can usually tell when something is a hallucination, and I'll cross-reference with the Research or Validator agent
I can't give specific examples at this time, but there are times when the Study agent presents me with incorrect or impossible scenarios, along with solutions that are entirely wrong. Its teaching method is much better than that of the agents I'm using, but providing it with proper guardrails and SME references is not an option.
A good example is Claude Code, where you plan something out using either predefined custom agents or agents Claude defines and creates on the fly, and they collaborate to produce the final product.
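For anyone curious what those predefined custom agents look like, here's a rough sketch of one, modeled on my Validator agent. Claude Code reads agent definitions from markdown files with YAML frontmatter under `.claude/agents/`; the file name, description wording, and tool list below are my own illustrative choices, not anything official:

```markdown
---
name: cissp-validator
description: Reviews CISSP practice answers and gives a detailed, harsh breakdown of reasoning errors.
tools: Read, Grep
---

You are a strict CISSP subject-matter expert. When given a question,
the user's answer, and their reasoning:

1. State whether the answer is correct per (ISC)² exam logic.
2. Break down every flaw in the reasoning, even on correct answers.
3. Cite the relevant CISSP domain and concept by name.
4. Do not soften feedback; be direct and specific.
```

The frontmatter tells Claude Code when to delegate to the agent; everything below it becomes the agent's system prompt. This is roughly the kind of guardrail and SME-reference scaffolding I'd want to hand the Kagi Study agent but currently can't.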