I have been test-driving Perplexity's Deep Research this week, and its quality and comprehensiveness were noticeably better for in-depth research queries than Kagi's AI Quick Answers. This isn't too surprising, since Perplexity spends far more resources in Deep Research mode, but it has shifted my search queries from Kagi to Perplexity for certain topics.
I've also benchmarked Kagi's new multi-step assistant, Ki, against Perplexity's Deep Research results. While Ki offers slightly more comprehensive results than the standard AI Quick Answer, it still misses key pieces of information that made Perplexity's results so useful.
For example:
- when researching OpenAPI integration for a specific TypeScript framework, Perplexity provided three different options, whereas Ki only presented a single, official solution.
- when searching for USB hubs that can broadcast keyboard input to multiple hosts, Perplexity identified the crucial category name needed to find the few products that offer this feature, while Ki only gave a generic, high-level response that didn't help me find specific products.
I really like that AI summaries are automatically generated when a search query ends with a question mark. Perhaps ending a query with two question marks could trigger a more in-depth research mode?