
Merging the two modes (i.e. keeping only one mode) would make users less confused about which one to choose.

I haven't noticed much of a difference in the quality of output between the two modes.

For example:

"Biden vs Trump 2024 poll"
Fast:
Based on the search results provided:

  • Most recent polls from 2023 showed Biden with a slight lead over Trump in hypothetical matchups for the 2024 presidential election.[1][2] However, some polls from late 2023 showed the race was close or Trump with a narrow lead.[3][4]

  • Polls conducted in individual battleground states in late 2023 generally showed Trump with an advantage over Biden on issues like the economy, foreign policy and immigration.[5]

  • One poll from December 2023 showed Biden and Trump in a "dead heat" in hypothetical matchups for 2024, with third party candidate Robert Kennedy gaining some traction as well.[6]

  • Support for Biden appears to have grown somewhat among women voters according to some late 2023 polls, potentially widening his lead over Trump.[2]

However, it is important to note that the 2024 election is still over a year away, so polls this far in advance can and often do change significantly closer to the actual election date.

[1] Gender gap expands between Biden and Trump, new poll shows
[2] Poll: Biden leads Trump in hypothetical 2024 matchup
[3] CNN Poll: Trump narrowly leads Biden in general election rematch
[4] Trump holds narrow lead over Biden in new poll | The Hill
[5] Trump Leads Biden in Nearly Every Battleground State, New Poll ...
[6] Biden, Trump in Dead Heat for 2024 Vote; Kennedy Gains Traction ...

Expert:
According to recent 2024 election polling, President Biden has opened up a 6 point lead over former President Trump (50% to 44%) in a hypothetical head-to-head matchup.[1][2] This reflects a shift from December 2022 when the race was tied between the two.[1]

The poll found a growing gender gap in support. 58% of women said they would vote for Biden, up from 53% in December. 53% of men said they would vote for Trump, largely unchanged.[1][2]

In a hypothetical race with Nikki Haley as the Republican nominee instead of Trump, the poll found Haley narrowly leading Biden 47% to 42%.[2] Haley performed better among independents, leading Biden 53% to 37%.[2]

[1] Biden opens up lead on Trump amid growing gender gap - The Hill
[2] Gender gap expands between Biden and Trump, new poll shows

"Who is Alan Turing"

Fast:
Alan Turing was a British mathematician, logician, cryptanalyst, and computer scientist. He is widely considered to be the father of theoretical computer science and artificial intelligence.[1][2] Some key facts about Alan Turing:

  • He was born in London, England in 1912 and died in 1954 at the age of 41.

  • Turing is most famous for his work at Bletchley Park where he cracked the German Enigma code, which was a major factor in Allied victory in World War II.[3]

  • In 1936, Turing published a paper on computable numbers that introduced the abstract concept of a Turing machine, which can be considered a model of a general purpose computer.[1][2] This laid the foundations for theoretical computer science.

  • Turing also made seminal contributions to mathematics, cryptanalysis, computer science, and artificial intelligence. He proposed what is now known as the Turing test to evaluate a machine's ability to exhibit intelligent behavior equivalent to a human.[1][2]

  • Tragically, Turing was prosecuted for homosexuality, which was illegal in the UK at that time, and was chemically castrated as an alternative to prison. He died in 1954 from cyanide poisoning in an apparent suicide.[3][4]

  • Today, Turing is widely honored as one of the most influential scientists in history for his diverse and fundamental contributions to many areas of science and technology.[1][2]

[1] Alan Turing | Biography, Facts, Computer, Machine ... - Britannica
[2] Alan Turing | The father of modern computer science | New Scientist
[3] 8 things you didn't know about Alan Turing | PBS NewsHour
[4] Difficult to decode: Alan Turing's life and its implications

Expert:
Alan Mathison Turing (1912-1954) was a British mathematician and early pioneer in computer science and artificial intelligence. He made important contributions in several areas:

Cryptanalysis: Created techniques to break German ciphers for Britain during World War II, including for the Enigma machine.[1]

Computer science: Formulated the concept of a "Turing machine", an abstract model of computation that defines an algorithm.[1] This led to the "Church-Turing thesis" that all computable functions can be computed by a Turing machine.

Artificial intelligence: Proposed the "Turing test" to assess whether a machine can exhibit intelligent behavior equivalent to a human. This test remains an influential evaluation metric in AI.[1]

Mathematics: Provided a formal definition of a "computable number" and then proved the "halting problem" undecidable in 1936.[1] This work laid foundations for theories of computation and computability.

Tragically, Turing was prosecuted in Britain for homosexuality and died in 1954, likely by suicide, at the young age of 41.[1] He made profound contributions despite his short career that continue to influence computer science and AI today.

Is there a significant difference between the two results? I think it's hard to tell.
When people click on "Research", they won't need to choose which mode to use; the assistant will choose the best mode for them.

    I disagree.

    Sometimes I want to use the maximum amount of resources to answer the question. And I want to be able to force the mode.

    The difference between Ultimate and Pro plans would feel smaller.

    Also, I like the ability to choose between different models, in Chat as well as in Research.

      KagiForMe

      Have you experienced a huge difference in the quality of output between different modes?

      I think what confused me was that I didn't know the difference between Fast and Expert mode in terms of the output generated. And sometimes I even experienced reduced quality in Expert mode.

      Could you please elaborate on how you

      "force the mode"

      and

      "want to use the maximum amount of resources to answer the question"

      ?

        mazil I agree that sometimes I've found Fast to generate a better answer than Expert, but other times I've found the reverse (I will try to catch examples next time it happens). As I understand it, the underlying models are different, for example Claude 2 for Expert vs. Claude Instant for Fast, and a better image model in Expert if you upload an image. It makes sense that the answers would be different as a result, and sometimes one answer might be better than the other. In general, my sense has been that the Expert answers are better for more complex queries.
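
        To make that concrete, here is roughly how I picture the mode split working under the hood (just my own guess as a user, not Kagi's actual code, and the model names are only placeholders):

        # A rough sketch of my assumption that each mode is mostly a model switch.
        # Not Kagi's actual implementation; model names are placeholders.
        MODE_TO_MODEL = {
            "fast": "claude-instant",  # smaller, faster model
            "expert": "claude-2",      # larger model (and, I believe, a better image model)
        }

        def pick_model(mode: str) -> str:
            """Return the placeholder model id used for a given research mode."""
            return MODE_TO_MODEL[mode]

        print(pick_model("fast"))    # -> claude-instant
        print(pick_model("expert"))  # -> claude-2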

          CrunchyFritos

          Thanks for sharing.

          I've been using !fast in some of my searches lately if I want to get a summarized paragraph about my query quickly. It's pretty handy.

          I have never put an image into my query tho, will try.

          expert answers are better for more complex queries.

          What kind of complex queries? For me, if I want to ask something more complex, I will probably jump to "Chat" and use those models there. (But I guess those models don't browse the web?)

          I think I am not very familiar with what each function can do right now, so I always run into decision paralysis given so many options that seem to accomplish the same task!

            mazil I usually just use !fast unless I feel like the question I have is a little more tricky or might require some nuance. For example, in the screenshots below, !fast provides an answer faster but misses the nuance that the more correct Expert answer provides (i.e. the first version of TSMC N7 did not use EUV, and EUV was only introduced in a later version).

              CrunchyFritos

              Seems like for this query, the fast mode produces a lot more irrelevant information.

                Just to add my 2¢:

                I have personally found that there are extreme differences between Fast Research, Expert Research, and FastGPT (which uses the same Claude-Instant model as Fast Research, but has a different UI and produces different results). I recently opened a bug report with a comparison between all three if you're interested in one example: https://kagifeedback.org/d/3141-results-for-when-is-lunar-new-year-do-not-provide-the-correct-date

                The Kagi docs describe some of the differences here: https://help.kagi.com/kagi/ai/assistant.html#assistant-modes

                My issue with these is that I would expect "Expert" to consistently deliver the best results, but it does not. I often find that FastGPT gives me the best results, which is counterintuitive. I think this has less to do with the language models used and more to do with the way Kagi integrates with search results. I've noticed that on the initial query, Expert usually has the fewest cited links (as in CrunchyFritos' example above).
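
                To illustrate why I suspect the search integration matters more than the model, here is a toy sketch (my own guess, not Kagi's actual pipeline): if a mode only feeds a couple of sources into the prompt, the model simply has less to cite, whichever model it is.

                # Toy illustration: fewer retrieved sources means less context in the
                # prompt, regardless of the language model. Not Kagi's actual pipeline.
                def build_prompt(query: str, sources: list[str], max_sources: int) -> str:
                    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources[:max_sources]))
                    return f"Answer '{query}' citing only these sources:\n{context}"

                sources = ["source A", "source B", "source C", "source D"]
                print(build_prompt("when is lunar new year", sources, max_sources=2))  # less to cite
                print(build_prompt("when is lunar new year", sources, max_sources=4))  # more to cite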

                I appreciate having different modes, and it makes sense since the intention is to make them available to different subscription tiers. I just wish "Expert" were more like a straight upgrade. Because they each currently have pros and cons, I often load up multiple tabs to run the same query in different models concurrently. This is a waste of time for me, and potentially a waste of money for Kagi.

                  mhersh Fair point. That tracks with my experience as well. It would be great to get a little more clarity on this.

                    We are leaning towards simplifying to just one mode. We are currently working on an overhaul of the assistant experience.

                      Vlad Looks great! It looks like this would be the landing page for all of Assistant (i.e. not just the Research Assistant), so all the assistant modes would get simplified to just this one page?

                      I'm guessing the bottom right "Model" selection would then allow us to choose GPT 3.5 Turbo, GPT 4, Claude, etc?

                      So the old "Research" mode would just become an Assistant query using a selected LLM model + Internet Access + Upload Option.

                      And the other modes would in a sense now get both File Upload (a planned feature request I think) and Internet Access. For example, what used to be Chat using e.g. GPT 4 would now just be an Assistant prompt selecting GPT 4 as the model, and I would now also be able to add Internet Access and/or upload a file?

                      I really like it. It solves a lot of the feature requests (e.g. add file upload to the Chat or Custom modes) and also makes it clear what's happening with the old "Research" mode under the hood (i.e. select an LLM + add Internet Access with optional Lens).
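
                      To check that I am reading the design right, here is roughly how I picture a single Assistant request under the new UI (just my mental model, with made-up field names, not Kagi's actual schema):

                      # My mental model of a unified Assistant request under the previewed UI.
                      # Field names are made up; this is not Kagi's actual schema.
                      from dataclasses import dataclass, field
                      from typing import Optional

                      @dataclass
                      class AssistantRequest:
                          prompt: str
                          model: str = "gpt-4"           # chosen from the bottom-right "Model" selector
                          internet_access: bool = False  # what used to distinguish "Research"
                          lens: Optional[str] = None     # e.g. "Academic"
                          attachments: list = field(default_factory=list)  # file upload
                          save_thread: bool = False      # incognito by default; opt in to save

                      # Old "Research" mode = model + Internet Access (+ optional Lens);
                      # old "Chat" = the same request with Internet Access off.
                      research_like = AssistantRequest("Who is Alan Turing", internet_access=True, lens="Academic")
                      chat_like = AssistantRequest("Who is Alan Turing", model="claude-2")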

                      I also really like how Incognito is being handled: no saving by default, and the user needs to opt in for each individual thread whether or not to save it.

                      A couple potential concerns:

                      1. Bang-Addressability: I use !fast, !expert, and !chat a lot; I wonder if that workflow would have an analog here.
                      2. Custom Assistants: Relatedly, I really really really like the custom assistant and would really really really like the option to have a few more with potentially slightly longer custom prompts, bang-addressability, upload, etc. I use it for things like summarization where I load in a pre-built prompt to help me summarize very specific types of documents that require industry-specific knowledge (e.g. legal briefs) where Universal Summarizer might struggle.

                        Vlad

                        Is this something similar to a reskinned ChatGPT?
                        I hope Kagi can build something specifically for doing serious academic research, like combining Google Scholar and an AI assistant. I don't know if that has been mentioned in another post, though.

                          mazil Based on the UI Vlad previewed, you could in theory do this by just turning on Internet Access and using the "Academic" lens. You can do this today using Research Assistant and turning on the "Academic" lens. Is that different from what you're envisioning?

                            CrunchyFritos

                            I see what you mean.
                            I think we will wait and see whether other parts of the UX/UI really are tailored to the experience of doing academic research. For example, it would be nice to have previews of journal article abstracts, etc.

                              CrunchyFritos

                              Correct in your assumptions.

                              1. I think we can preserve some of them (!chat will launch it with internet access off for example)
                              2. Interesting! Can you share more: what are the use cases, and how would you like to set them up and access them?

                                mazil Not sure we will go that deep into a niche, as once you cross a certain threshold you need a 20-person team to do just that one thing right. I can recommend https://elicit.com for this purpose.

                                  CrunchyFritos Correct, this will be our closest approximation, as it will narrow the search space (and you can even create your own lens) while still using all the power of AI for research.