The Idea
Just as Discord users can ping or command bots within a server for specific purposes, I am suggesting a similar feature in Assistant.
The "consulting AI model" can be defined with these three traits:
- A name (could be user-defined, or just the model name itself)
- The underlying LLM (e.g. 3.5-sonnet, deepseek-v3, etc.)
- A custom system instruction [optional]
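The three traits above could be sketched as a small config record. A minimal sketch, assuming a hypothetical `Consultant` type (all names and fields here are my own invention, not an existing Assistant API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Consultant:
    """Hypothetical shape of a user-defined consulting AI model."""
    name: str                            # e.g. "Bob", used for @-mentions
    model: str                           # identifier of the underlying LLM
    system_prompt: Optional[str] = None  # optional custom instruction

# Example definition matching the use case below
bob = Consultant(
    name="Bob",
    model="deepseek-v3",
    system_prompt="Implement code in idiomatic, typed Python with docstrings.",
)
```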
In a thread, a user would be able to call this consulting AI model by "pinging" it in a message, much as they would switch the model manually. The consulting AI model would then respond to that message with the context of the prior conversation.
After the consulting AI model responds, I think the original LLM should handle subsequent messages, with a button offered to fully switch over to the consulting AI model instead.
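The routing rule described above (a single @-mention redirects one turn, then the thread falls back to its default model) could look roughly like this. This is a sketch under my own assumptions, not how Assistant is actually implemented; `route_message` and the dict shape are hypothetical:

```python
import re

def route_message(message: str, default_model: str, consultants: dict) -> tuple:
    """Pick the (model, system_prompt) for a single turn.

    If the message starts with an @-mention of a known consultant, only
    this turn is routed to that consultant's model; otherwise the thread's
    default model handles it. All names here are hypothetical.
    """
    match = re.match(r"@(\w+)\b", message)
    if match and match.group(1) in consultants:
        c = consultants[match.group(1)]
        return (c["model"], c["system_prompt"])
    return (default_model, None)

consultants = {"Bob": {"model": "deepseek-v3",
                       "system_prompt": "Write idiomatic Python."}}

# The mention routes this one turn to Bob's model...
print(route_message("@Bob help me implement this", "deepseek-r1", consultants))
# ...while a plain follow-up goes back to the thread's default model.
print(route_message("thanks, now optimize it", "deepseek-r1", consultants))
```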
Example Use Case
For example, imagine having a consulting AI model named "Bob". Bob's underlying model is one I prefer for coding tasks, and Bob has a system prompt I configured so that it implements code in the language, format, and style I want.
If I have a task I want to write a program to accomplish, I might:
- Use a reasoning model to work out an algorithm that solves the task
- Ping Bob: "@Bob help me implement this"
Such a thread might look like this:
User: [problem]
DeepSeek R1: [algorithmic solution]
User: "@Bob help me implement this"
Consultant Bob: [implementation]
Subsequent replies are still handled by DeepSeek R1
Implications
This feature would allow users to define multiple specialized AI models, each well suited to a specific task, and seamlessly call upon them from within a thread with another AI model.
Credits
The idea for such "consulting AI models" was first suggested by DoorHandle and elaborated by me in the Kagi Discord.