What does your feature entail?
Allow Kagi Assistant to perform multiple search queries. There are two ways I can see this being handled:
- The search queries could be determined directly from the user prompt, much as is done now, but with each query targeting a different piece of information
- If the results of a query do not give the LLM adequate information to answer the user's question, follow-up queries could be made automatically with different wording
In the second approach, multiple queries would only be made when the first proves inadequate. A rough sketch of how the two approaches might fit together follows below.
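To make the flow concrete, here is a minimal Python sketch of how the two approaches could combine. Every name in it (`search_web`, `llm_complete`, `MAX_FOLLOW_UPS`, and the helper functions) is a hypothetical stand-in, not Kagi's actual internal API:

```python
MAX_FOLLOW_UPS = 2  # cap automatic re-queries so latency stays bounded


def search_web(query: str) -> list[str]:
    """Placeholder for the Assistant's search backend (hypothetical)."""
    raise NotImplementedError


def llm_complete(prompt: str) -> str:
    """Placeholder for a call to the underlying language model (hypothetical)."""
    raise NotImplementedError


def decompose_prompt(user_prompt: str) -> list[str]:
    """First approach: derive one search query per distinct piece of
    information requested in the prompt."""
    plan = llm_complete(
        "List the web search queries needed to answer the prompt below, "
        f"one per line:\n{user_prompt}"
    )
    return [line.strip() for line in plan.splitlines() if line.strip()]


def gather_passages(user_prompt: str) -> tuple[list[str], list[str]]:
    """Second approach layered on top: re-query with different wording
    only when the passages gathered so far are judged inadequate."""
    queries = decompose_prompt(user_prompt)
    passages: list[str] = []
    for query in queries:
        passages.extend(search_web(query))

    for _ in range(MAX_FOLLOW_UPS):
        verdict = llm_complete(
            f"Do these passages suffice to answer '{user_prompt}'? "
            "Reply YES, or reply with a single reworded search query "
            f"that would fill the gap.\n\n{passages}"
        )
        if verdict.strip().upper() == "YES":
            break
        reworded = verdict.strip()
        queries.append(reworded)  # alternative wording, second approach
        passages.extend(search_web(reworded))

    # `queries` would also drive the per-message list of searches
    # mentioned under the user-experience question below.
    return passages, queries
```

The cap on follow-up queries is one way the extra latency discussed further down could be kept bounded.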
What is it for?
This is intended to provide more useful information to the user in any given response by Kagi Assistant. It would help with:
- Avoiding cases where Kagi Assistant cannot find the information asked for and responds with something akin to "I don't have enough specific information in the provided passages"
- Avoiding the need to send multiple prompts, one for each piece of information requested
How will it affect existing workflows or user experience?
The proposed feature is not directly user-facing, but it would affect Kagi Assistant's responses. It would not significantly change existing workflows aside from improving the quality of the responses, and it would mean fewer prompts and back-and-forths for the user to obtain the requested information.
However, it could hinder user experience insofar as multiple search queries would likely take longer to perform, especially when a follow-up query is made because the first did not return the necessary information.
One user-facing change: the icon at the bottom right of the message that currently links to the query would need to link to a list of queries instead.
What are the exact ways that you see a user using your proposed feature?
The proposed feature is not directly user-facing, but it would improve the quality of Kagi Assistant's responses.
Behavior in another product:
In an early version of what was then called Microsoft Bing Chat (now Copilot), the UI explicitly listed which queries were used to formulate a response. You could see real-time feedback on the search queries before the LLM responded, and there were often several. In some circumstances you could watch it "think" for a second or so, after which an additional search query would appear with alternative wording, presumably because the original query did not return enough information to answer the user prompt. These extra queries did sometimes add a few seconds to the response time. Unfortunately, this UI feedback no longer exists, so screenshots cannot be provided.