The Kagi assistant is a fantastic tool, but it has a significant performance issue. When you keep a conversation so that it persists indefinitely instead of expiring after 24 hours, it grows over time until simply opening it or sending a new message becomes practically impossible because performance degrades so badly, forcing you to abandon the thread and start a new one.
It would be interesting to apply virtualization to the assistant's conversation view: render only the messages that are visible on the user's screen, plus a small margin buffer, and load further messages in batches while scrolling, instead of keeping the entire conversation loaded and consuming resources.
How virtualization could work in the Kagi assistant (a rough sketch follows the list):
- Calculate the total height the complete list of messages would occupy.
- Keep a container of that total height so the scrollbar position and length stay accurate.
- Render only the visible messages, plus a small margin buffer.
- Swap the rendered messages in and out as the user scrolls.
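Below is a minimal sketch of the idea in TypeScript. It assumes fixed-height message rows for simplicity (real messages vary in height, so a production version would measure or estimate each row). The function name `computeVisibleRange` and all parameters are hypothetical illustrations, not Kagi's actual code.

```ts
// Minimal fixed-height virtualization sketch (illustrative only).
// Assumption: every message row has the same height.

interface VirtualWindow {
  totalHeight: number; // height of the spacer that keeps the scrollbar accurate
  startIndex: number;  // first message index to render
  endIndex: number;    // last message index to render (inclusive)
  offsetTop: number;   // vertical offset applied to the rendered slice
}

function computeVisibleRange(
  totalItems: number,
  itemHeight: number,
  viewportHeight: number,
  scrollTop: number,
  overscan: number = 5, // extra rows above and below the viewport (the "margin buffer")
): VirtualWindow {
  const totalHeight = totalItems * itemHeight;

  // First row that intersects the viewport, minus the overscan buffer.
  const startIndex = Math.max(0, Math.floor(scrollTop / itemHeight) - overscan);

  // Last row that intersects the viewport, plus the overscan buffer.
  const visibleCount = Math.ceil(viewportHeight / itemHeight);
  const endIndex = Math.min(totalItems - 1, startIndex + visibleCount + 2 * overscan);

  return { totalHeight, startIndex, endIndex, offsetTop: startIndex * itemHeight };
}

// Example: a 5,000-message conversation, 120 px per row, a 900 px viewport,
// scrolled 300,000 px down. Only 19 rows get rendered instead of 5,000.
const slice = computeVisibleRange(5000, 120, 900, 300_000);
console.log(slice); // { totalHeight: 600000, startIndex: 2495, endIndex: 2513, offsetTop: 299400 }
```

On each scroll event the UI would call this with the new `scrollTop`, render only the messages in `[startIndex, endIndex]` inside a spacer of `totalHeight`, and shift them down by `offsetTop`; the same range could also drive batched fetching so that only messages near the viewport are ever loaded.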