I'm curious if there is an existing caching mechanism within Kagi that could be leveraged for a "best effort" solution that improves over time.
Constrained metadata should be easily cacheable in both storage and compute terms, and indexing isn't necessarily required. I have no idea what backend constraints exist, but multiple API queries per search could be explored. It's a self-feeding capability that would grow and improve over time, and I'd imagine it's applicable across a vast array of API backends.
This problem reminds me a lot of DNS forwarder cache management. The latency requirements, data sizes, and hot-cache equilibrium states seem very similar.
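To make that analogy concrete, here's roughly the shape I'm picturing: a cache-aside lookup with a TTL, just like a forwarder answering from its cache before going upstream. Everything in this sketch (the names, the TTL, the in-memory dict, the `fetch_from_api` callable) is an illustrative stand-in, not anything from Kagi's actual stack:

```python
import time

# Hypothetical sketch of a DNS-forwarder-style cache-aside lookup for
# YouTube upload dates. All names and values here are made up for illustration.

CACHE: dict[str, tuple[str, float]] = {}   # video_id -> (upload_date, expires_at)
TTL_SECONDS = 30 * 86400                   # upload dates rarely change, so a long TTL seems safe

def get_upload_date(video_id: str, fetch_from_api) -> str | None:
    """Return a cached upload date, falling back to the API on a miss (best effort)."""
    entry = CACHE.get(video_id)
    if entry is not None and entry[1] > time.time():
        return entry[0]                         # warm cache: no API call needed

    upload_date = fetch_from_api(video_id)      # may return None on quota limits or errors
    if upload_date is not None:                 # only cache successes; failures retry next time
        CACHE[video_id] = (upload_date, time.time() + TTL_SECONDS)
    return upload_date
```

The self-feeding part falls out naturally: every search that misses the cache does the API work once, and every later search for the same video is served warm.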
One point of note that @Thibaultmol brought up, which I feel is worth calling out specifically: YouTube video metadata is subject to change. This is a very important point, but funnily enough it may be irrelevant to his use case. I don't think that 'most' changes to YouTube metadata are relevant to search-by-date use cases.
This leaves us with what could be a very constrained use case, one that could be served by a nothing-fancy key-value store with some salt and pepper. I'd bet something well suited to this task is already part of the stack. OK... that's starting to sound downright feasible.
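For the "easily cacheable in storage terms" claim, a quick back-of-the-envelope (the corpus size below is purely illustrative, not a real number):

```python
# Rough, illustrative math for how small the constrained metadata record is:
# an 11-character YouTube video ID keyed to an 8-byte upload timestamp.
record_bytes = 11 + 8                  # key + value, ignoring store overhead
videos_cached = 100_000_000            # made-up corpus size, for illustration only
total_gb = record_bytes * videos_cached / 1e9
print(f"~{total_gb:.1f} GB")           # roughly 1.9 GB before any store overhead
```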
There remain a lot of interesting legal, technical, fiscal, and user-acceptance blockers orbiting this idea, but it might be worth exploring. One that comes to mind is whether or not a best-effort solution is good enough to satisfy the ask. Are best-effort results acceptable to the user? Does it align with Kagi's principles? 🤷 I'm very curious.