The idea is simple: let users upload files for Kagi to index and add to its public search index. This way Kagi can accelerate the inclusion of sources without having to hire people for it. Think of books, torrents, PDFs, or even spreadsheets. People and institutions are sitting on a lot of useful data sources that are not online, or are not indexed properly.
The user would upload a file or a folder with her data through an interface provided by Kagi. Depending on how long indexing takes, Kagi could show the user a result of the processing right away or e-mail it later. After that, the content becomes part of the index, searchable by Kagi and available for AI assistants such as Quick Answer to process.
Here is the important part: since we're not dealing with websites here, there will be a limit to how a user can access the data after a search has been made. I think it would be perfectly fine for Kagi not to link onward to the source. Just having the source indexed, and the index searchable by all users, means they can track down the source themselves after confirming (with a snippet) that they've found what they're looking for.
As a simple example: let's say a user uploads a database of subtitles from movies and TV series. Now everybody can search for a quote they heard in a movie, and Kagi can show where it's from (even with a timestamp from the database). Of course, they won't be able to get straight to the movie from their search results, but Kagi has still helped with a common search.
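To make the subtitle example concrete, here is a minimal sketch of what such a lookup could look like. The data, function names, and timestamps are all hypothetical, for illustration only; a real search index would of course be far more sophisticated.

```python
# Hypothetical subtitle lookup: map each subtitle line to its
# (title, timestamp), then search for a remembered quote.

def build_index(subtitle_records):
    """Map each lowercased subtitle line to its (title, timestamp)."""
    index = {}
    for title, timestamp, line in subtitle_records:
        index[line.lower()] = (title, timestamp)
    return index

def search(index, quote):
    """Return (title, timestamp) pairs whose line contains the quote."""
    q = quote.lower()
    return [hit for line, hit in index.items() if q in line]

# Hypothetical uploaded data: (title, timestamp, subtitle line)
records = [
    ("Casablanca", "01:39:12", "We'll always have Paris."),
    ("The Matrix", "00:28:04", "There is no spoon."),
]

index = build_index(records)
print(search(index, "always have paris"))  # → [("Casablanca", "01:39:12")]
```

The point is that the result (title plus timestamp) is useful on its own, even though the search never links to the movie itself.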
Kagi could even delete the original uploaded files after they have been indexed, so that only the index remains. (I don't know if this is how indexing works in practice.)
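The "keep only the index" idea can be sketched as follows: build an inverted index that retains only short snippets per document, then discard the originals. Everything here (the names, the snippet length, the data) is an assumption for illustration, not Kagi's actual pipeline.

```python
# Sketch: index documents, keep only short snippets, discard originals.

def index_and_discard(documents):
    """documents: {doc_id: full_text}. Returns word -> {doc_id: snippet}."""
    inverted = {}
    for doc_id, text in documents.items():
        snippet = text[:40]  # retain only a short snippet, not the source
        for word in set(text.lower().split()):
            inverted.setdefault(word, {})[doc_id] = snippet
    documents.clear()  # originals deleted; only the index remains
    return inverted

# Hypothetical uploaded document
docs = {"book1": "Useful data that was never put online before"}
idx = index_and_discard(docs)
print("online" in idx)  # the word is searchable
print(docs)             # {} — the original text is gone
```

This way searches still return a confirming snippet, while the uploaded source itself no longer exists on Kagi's side.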
Footnote: This is related to my previous suggestion to have Kagi index all the books on the internet, but expanded to all forms of data.