astrodad I asked Kagi if there were any countries in Africa with the letter K. This was the result. It had the same results as Google, which is to say, incorrect data based on a Reddit post. It should not return clearly incorrect data.
Vlad astrodad That is pretty funny 🙂 The AI we use is Claude 3 Haiku, and it has some limitations. This is also clearly disclosed if you hover over the info box. To use higher quality models, we'd likely have to increase the subscription cost, so it is a compromise.
Recast Kagi already provides a trust marker in the shield icon. If the user is asking about a factual topic, perhaps the AI could use this to prefer trusted websites as sources? It could be more wary of untrusted sites, forums, and sites with user-generated content, as they may be less reliable.
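Roughly what I have in mind, as a sketch only (the trust scores, domain list, factual-query flag, threshold, and example URLs are all invented here, not Kagi's actual data or code):

```python
# Rough sketch only: trust scores, domains, and threshold are made up.

UGC_DOMAINS = {"reddit.com", "news.ycombinator.com", "quora.com"}

def filter_sources(results, query_is_factual, min_trust=0.6):
    """Prefer trusted sites as sources when the query looks factual.

    `results` is assumed to be a list of dicts with "url" and "trust"
    (the score behind the shield icon, normalised to 0..1).
    """
    if not query_is_factual:
        return results

    def penalised_trust(result):
        trust = result["trust"]
        # Be more wary of forums and user-generated content.
        if any(domain in result["url"] for domain in UGC_DOMAINS):
            trust *= 0.5
        return trust

    kept = [r for r in results if penalised_trust(r) >= min_trust]
    # Fall back to the original list rather than answering from nothing.
    return kept or results

# Illustrative only: the Reddit thread drops out, the encyclopedia entry stays.
print(filter_sources(
    [{"url": "https://www.reddit.com/r/geography/comments/example", "trust": 0.9},
     {"url": "https://www.britannica.com/place/Kenya", "trust": 0.9}],
    query_is_factual=True))
```

The fallback at the end is just so the AI still has something to work with when no source clears the bar; it could equally decline to answer in that case.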
dj I asked fgpt: How many Muslim presidents has the US had? The response was: "According to the available information, the United States has had one Muslim president, Barack Hussein Obama.[1]" The cited source [1] is the snippet "If you ask it how many Muslim presidents the US has had, it will ..." and links to https://news.ycombinator.com/item?id=40461698, where a post describes the same question being put to Google's AI. fgpt retrieves that same answer. So a wrong response from Gemini gets posted online, a single page containing that sentence gets retrieved, and the wrong answer is repeated: a vicious circle. The correct answer is that the US has had no Muslim presidents. And maybe an answer backed by only one source is too thin to assert anything.
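To illustrate that last point, a minimal sketch of a "require more than one independent source" check; the function name and the two-domain threshold are hypothetical, not anything fgpt actually does:

```python
# Sketch of the "one source is too light" idea; the function and the
# threshold of two distinct domains are invented.

from urllib.parse import urlparse

def enough_independent_sources(source_urls, minimum=2):
    """Only let a fact be stated outright when it is backed by at least
    `minimum` distinct domains; otherwise hedge or decline to answer."""
    domains = {urlparse(url).netloc.removeprefix("www.") for url in source_urls}
    return len(domains) >= minimum

# A single Hacker News thread is not enough to assert anything.
print(enough_independent_sources(
    ["https://news.ycombinator.com/item?id=40461698"]))  # False
```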