Recent legal and regulatory movement suggests that AI search features that summarize content and generate answers may carry legal risk, and there are genuine concerns about how that practice will affect the internet long term. At the same time, AI can clearly add value beyond surfacing and storing links.
I suggest that the AI function of search could be to:
- generate and present to the user a small set of criteria that describe an ideal search hit;
- let the user adjust the weights or critique the criteria (directly, or in a collaborative AI chat);
- display results ranked on those criteria, showing how each result scored on each criterion, possibly with an explanation of why the LLM judged it relevant to the user's query.
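The ranking step above is just a weighted sum over per-criterion scores. A minimal sketch, assuming the LLM has already produced criterion names and a score per result per criterion (all names, weights, and scores here are illustrative, not from any real system):

```python
# Hypothetical sketch: rank search results by user-weighted, LLM-proposed criteria.
# Criterion names, weights, and scores below are made-up placeholders.

def rank_results(results, criteria, weights):
    """Sort results by the weighted sum of their per-criterion scores.

    results: list of dicts, each with a 'scores' dict (criterion -> 0..1)
    criteria: list of criterion names
    weights: dict of criterion -> weight (user-adjustable)
    """
    def total(result):
        return sum(weights.get(c, 1.0) * result["scores"].get(c, 0.0)
                   for c in criteria)
    return sorted(results, key=total, reverse=True)

criteria = ["recency", "authority", "depth"]                 # proposed by the LLM
weights = {"recency": 1.0, "authority": 2.0, "depth": 0.5}   # tuned by the user

results = [
    {"url": "a.example", "scores": {"recency": 0.9, "authority": 0.2, "depth": 0.4}},
    {"url": "b.example", "scores": {"recency": 0.3, "authority": 0.9, "depth": 0.8}},
]
ranked = rank_results(results, criteria, weights)
# With authority weighted heavily, b.example outranks a.example.
```

The point of keeping the scoring this simple is that the user can see exactly why a result ranked where it did, and adjusting a weight re-sorts the list deterministically.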
This way the user gains insight from the LLM's processing of the content, and can use that insight to directly improve their results, without simply co-opting the original authors' work.
There are likely better ways to skin this cat, but I think this at least shows we can plausibly do LLM-enhanced search without fair-use issues, and in a way that is truer to an actual "search" function.
I've never seen something like this elsewhere.
I imagine perhaps two columns, with results primary but the criteria clear and prominent. Criteria are color-coded, and each result has corresponding colored dots with some goodness indicator inside (I like numbers, but up/down chevrons might read better visually). Criteria would be small in number (2-5), so the colors should be easy to keep distinguishable in greyscale, and everything should fit on a single screen at desktop scale.
Each criterion would have UI buttons for editing it and for increasing or decreasing its weight. Maybe an icon of a weight with a + or - inside?