Oftentimes I find myself blocking the same prototypical SEO-spam websites over and over in my search results, something an LLM would probably be pretty good at automating! Given Kagi's openness to AI integrations thus far, do you think there'd ever be a chance we could get LLM-based filtration of our search results (opt-in and off by default, obviously) based on our own personal blocklists, or maybe even on a public database of opted-in users' blocklists? There's probably a great opportunity here to take a bunch of volunteered website-ranking data and factor it into the search algorithm from the get-go.
For example, if I block ten different auto-generated SEO-spam nothingburger websites, Kagi should be able to detect that I really don't like those sorts of sites and, after asking my permission, automatically downrank similar ones.
I imagine there'd be two levels of this:
1) A less effective but more private option that uses zero-shot methods: an LLM looks at the search results for a given query, compares them against content from pages I've rated as good or bad, and makes filtration decisions based on that.
2) A probably more effective but more complex option: opted-in preference data from many Kagi users is pooled and used to fine-tune an LLM specifically for ranking pages according to human preferences.
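To make option 1 a bit more concrete, here's a rough sketch of what the per-query filtering loop might look like. Every name here is hypothetical, and the crude keyword-overlap scorer is just a stand-in for the actual LLM judgment call a real implementation would make:

```python
# Hypothetical sketch of zero-shot, blocklist-based result filtering.
# overlap_score() is a placeholder for an LLM call that would compare a
# result against examples of pages the user has blocked.

def overlap_score(text: str, examples: list[str]) -> float:
    """Crude stand-in for an LLM similarity judgment: the fraction of
    words in `text` that also appear in the user's blocked examples."""
    words = set(text.lower().split())
    example_words = {w for ex in examples for w in ex.lower().split()}
    if not words:
        return 0.0
    return len(words & example_words) / len(words)

def filter_results(results, blocked_examples, threshold=0.5):
    """Keep only results that don't resemble previously blocked pages."""
    kept = []
    for title, snippet in results:
        score = overlap_score(f"{title} {snippet}", blocked_examples)
        if score < threshold:
            kept.append((title, snippet))
    return kept
```

So with a blocklist full of "top 10 best X review" spam, a query's listicle-style results would score high against those examples and get dropped, while a substantive technical page would score low and survive.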
As far as interfaces go, I feel like a simple toggle in User Preferences to turn either of these two filtration methods on or off would be sufficient.