When entering a long query, particularly with Ki Research, I would like to be able to close the tab, window, or web app containing Kagi Assistant and have the query continue being fulfilled in the background, so that the answer is visible once the tab, window, or web app is reopened.
When using models that additionally consult Kagi Search, especially reasoning models, response times are naturally longer. I want to be able to close the workflow and later reopen it at the point of my last input, with the completed response waiting; this feels more natural than having the session cut off mid-response.
My understanding is that other providers such as ChatGPT cancel queries on close as a token-saving measure. But since Kagi's fair use policy already prevents users from consuming more credit than they pay for, it would be good to offer background query resolution as an option. This could be an on/off toggle in the Assistant settings.