Vlad
As I imagine it, it's a combination of features Kagi already offers, applied to improve research with the Assistant.
The Assistant would get a predefined research mode - just like the predefined Code Assistant, but for research. In this mode, image and video results on the topic being researched are displayed alongside the Assistant's output, serving as sources and supplements to it.
These additional image and video results could come from Kagi's own search results. That way, Kagi's image and video search would be combined with the Assistant.
For example, say you're researching colony collapse disorder. The Assistant could display additional graphics from the internet in its output, such as charts showing the trend in the number of honey-producing bee colonies over the past decades, or link to YouTube videos covering the topic. Ideally, a citation in the Assistant's output could even point to a YouTube video with a timestamp, making it easy to verify the accuracy of the Assistant's claims.
I hope that helps explain my idea. As mentioned in the OP, I recommend trying Perplexity, since it's basically exactly what I have in mind here.