
Currently the AI can help answer quick questions and summarize the search results it used; however, with the recently released DeepSeek, it could be better.
The free version of DeepSeek already makes extensive use of internet search, and the important part is that it also makes its chain of thought visible to the user. By integrating with DeepSeek, the AI search feature activated by a question mark could provide users with a chain of thought and potentially some keywords they can use to expand their search.
Not to mention DeepSeek is extremely cheap compared to other providers; this could let Kagi offer AI with a better knowledge base and accuracy.

For example: "CefSharp different settings for each controls?"
The AI response is incorrect, since CefSharp does not offer the ability to initialize settings for individual Cef browser controls. Cef.Initialize is global for the lifetime of the application and cannot be re-initialized.
By upgrading to DeepSeek, accuracy could be increased; also, due to the reduced cost, the feature could process more pages and thus be more likely to find a correct and usable answer.
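
To make the idea concrete (just a rough sketch, not Kagi's actual integration): DeepSeek exposes an OpenAI-compatible API, so surfacing the chain of thought next to the answer could look roughly like this in C#. The base URL, the deepseek-reasoner model name, and the reasoning_content field are as described in DeepSeek's public API docs; everything else is illustrative:

    // Rough sketch only - assumes DeepSeek's OpenAI-compatible /chat/completions endpoint.
    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Text.Json;
    using System.Threading.Tasks;

    class QuickAnswerSketch
    {
        static async Task Main()
        {
            using var http = new HttpClient { BaseAddress = new Uri("https://api.deepseek.com") };
            http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
                "Bearer", Environment.GetEnvironmentVariable("DEEPSEEK_API_KEY"));

            var body = JsonSerializer.Serialize(new
            {
                model = "deepseek-reasoner", // R1; "deepseek-chat" (V3) is the cheaper non-reasoning option
                messages = new[] { new { role = "user", content = "CefSharp different settings for each controls?" } }
            });

            var response = await http.PostAsync("/chat/completions",
                new StringContent(body, Encoding.UTF8, "application/json"));
            using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
            var message = doc.RootElement.GetProperty("choices")[0].GetProperty("message");

            // The reasoning could be shown as the chain of thought / keywords to expand the search on.
            Console.WriteLine("Reasoning: " + message.GetProperty("reasoning_content").GetString());
            Console.WriteLine("Answer:    " + message.GetProperty("content").GetString());
        }
    }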

    I'm not sure DeepSeek would be an improvement (https://kagi.com/assistant/32c9522f-dd95-47f0-ab23-fa59009a60c5):

    This is an example of your query, with internet access (DeepSeek seems to have issues with calling the ResearchAgent for searching):


    <think>
    Okay, the user is asking about CefSharp and whether it allows different settings for each control. Let me start by recalling what I know about CefSharp. It's a .NET wrapper for the Chromium Embedded Framework (CEF), used for embedding web browsers into .NET applications. Each instance of a ChromiumWebBrowser control in CefSharp typically has its own settings.

    Wait, but I need to confirm how the settings are managed. Are settings applied globally or per instance? From what I remember, CefSharp has both global and per-browser settings. The global settings are configured via Cef.Initialize, which affects the entire application. But for per-control settings, each ChromiumWebBrowser instance can have its own BrowserSettings, like enabling/disabling JavaScript, plugins, or WebGL.

    Let me think about how to structure the answer. The user probably wants to know if they can have different configurations for different browser controls in the same app. For example, one control with JavaScript disabled and another with it enabled.

    I should mention that global settings are set once during initialization, but per-browser settings can be adjusted for each instance. Examples of per-control settings might include JavaScriptEnabled, WebSecurity, or ImageLoading.

    Also, maybe there's a way to use RequestContexts to isolate settings further, allowing different contexts with their own preferences and cache. That could be useful for more advanced scenarios.

    I should provide a code example to illustrate creating two browser instances with different settings. Make sure to note that after changing settings, a reload might be necessary. Also, mention that some settings are global and can't be changed per instance, like command-line arguments or cache path.

    Wait, I should check if the per-browser settings are correctly applied. For instance, creating a new BrowserSettings object for each control and assigning it to the BrowserSettings property. Then initializing each browser with those settings.

    I need to make sure the answer is clear, with examples and important notes about what can and can't be customized per control. Also, mention that some features require using RequestContext for better isolation.

    Citations would be needed here. Since I'm relying on my existing knowledge, I need to ensure accuracy. If I were to call the ResearchAgent, I might look up the official CefSharp documentation on GitHub or their wiki to confirm how settings are applied. But since the user is asking for different settings per control, and I recall that BrowserSettings is per instance, I can proceed.

    So the answer would outline global vs. per-instance settings, provide a code example, mention RequestContexts for advanced cases, and note limitations. Make sure to highlight key points with bold and use code blocks appropriately.
    </think>

    CefSharp allows per-control configuration through its ChromiumWebBrowser settings while maintaining some global requirements. Here's how to manage different settings for individual controls:

    1. Global Initialization (Applies to all instances):

      var settings = new CefSettings();
      settings.CachePath = "cache";  // Global cache location
      Cef.Initialize(settings);
    2. Per-Control Settings:

      // First browser with JavaScript disabled
      var browser1 = new ChromiumWebBrowser("https://example.com") {
          BrowserSettings = new BrowserSettings {
              Javascript = CefState.Disabled,
              Webgl = CefState.Enabled
          }
      };
      
      // Second browser with different settings
      var browser2 = new ChromiumWebBrowser("https://example.com") {
          BrowserSettings = new BrowserSettings {
              Javascript = CefState.Enabled,
              ImageLoading = CefState.Disabled
          }
      };

    Key configuration options available per control:

    • Javascript: Enable/Disable JS execution

    • WebSecurity: Control cross-origin restrictions

    • ImageLoading: Toggle image loading

    • Webgl: Enable/Disable WebGL support

    • Plugins: Manage plugin permissions

    • Advanced Isolation using RequestContext:

      var context1 = new RequestContext();
      var context2 = new RequestContext();
      
      browser1.RequestContext = context1; 
      browser2.RequestContext = context2;

      This allows separate:

    • Cookie stores

    • Cache partitions

    • Permission settings

    Important Notes:

    1. Some settings like CachePath and command-line arguments remain global
    2. Changes to BrowserSettings after initialization require browser reload
    3. Use Cef.UIThreadTaskFactory when modifying settings post-initialization
    4. GPU acceleration settings are global in CefSettings

    For complete isolation, consider using separate processes through CefSharpSettings.SubprocessExitIfParentProcessClosed.

      I'm not sure that chain-of-thought models are a good fit for a "quick answer" application, unless they dramatically improve the inference speed.

        I would be curious to see if DeepSeek V3 produces better results for quick answer than 4o-mini (which I think is what is currently being used). I think R1 for quick answer may not be the best fit. Maybe when you "continue with assistant" you could specify the custom assistant/model used.

          dps Specifying the model to be used is a good idea.
          Current benchmarks show DeepSeek is well beyond what OpenAI has offered, at a fraction of the cost. The simultaneous performance increase and cost reduction triggered the Nvidia stock landslide yesterday.
          R1 is advertised as being 90% as capable as the full model.
          Also, DeepSeek offers their API at 1 million tokens per penny; at the least it's worth a try.

          I missed the reply above; I was blind.

            Nashwan Maybe add a button like "Opinion from AI" to activate CoT?

              silvenga I'm not exactly sure how they manage to access other agents or the web; their website has a special button for searching the web.

              The buttons from left to right are "Deep Reasoning (R1)" and "Search on Internet".
              If I use the same query, it produces a result similar to the one in your link.
              https://chat.deepseek.com/a/chat/s/0f68b979-a77d-4e44-a6bb-978a03f8be4d
              The last query in the chat above is the one with that button checked, and it does search the web.

              Thank you for trying it out, though. I understand this may not be an improvement. However, hopefully the lower-cost part may still be helpful.

                Dennis_Liu yeah, existing LLMs typically call tools or output special tokens to invoke them. Kagi's system instructs the LLM to call these tools and how to call them - it just looks like DeepSeek has issues with that right now (less mature weights?).
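
                As a generic illustration of that mechanism (not Kagi's actual system - the research_agent tool name below is made up), an OpenAI-style request declares the tools the model may call, and a model that handles them well replies with a structured tool_calls entry instead of prose:

                  // Generic OpenAI-style tool declaration; "research_agent" is a made-up name,
                  // purely to illustrate the mechanism, not Kagi's real tool interface.
                  using System;
                  using System.Text.Json;

                  var request = new
                  {
                      model = "deepseek-chat",
                      messages = new[] { new { role = "user", content = "CefSharp different settings for each controls?" } },
                      tools = new object[]
                      {
                          new
                          {
                              type = "function",
                              function = new
                              {
                                  name = "research_agent",   // hypothetical web-search tool
                                  description = "Search the web and return relevant pages.",
                                  parameters = new
                                  {
                                      type = "object",
                                      properties = new { query = new { type = "string" } },
                                      required = new[] { "query" }
                                  }
                              }
                          }
                      }
                  };

                  // A model that handles tools well replies with choices[0].message.tool_calls
                  // (e.g. {"name":"research_agent","arguments":"{\"query\":\"...\"}"}) instead of prose.
                  Console.WriteLine(JsonSerializer.Serialize(request, new JsonSerializerOptions { WriteIndented = true }));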

                I only bring up that example because DeepSeek's result appears less useful (and outputs a huge number of tokens). Running the same query with GPT-4o does return the correct result:


                In CefSharp, it is not possible to initialize separate instances of Cef with different settings for each control within the same application. This limitation arises because the Chromium Embedded Framework (CEF), which CefSharp is based on, uses a single browser process for all instances. As a result, the settings initialized during the first Cef.Initialize() call apply globally to all controls in the application[1].

                If you need different configurations (e.g., proxies or debugging ports) for each control, you must work within these constraints. Some approaches include:

                • Customizing per browser instance: Use features like RequestContext to apply specific settings (e.g., proxies) to individual browser instances without reinitializing Cef globally.
                • Separate applications: If completely independent settings are required, you may need to run separate processes for each control, ensuring they do not interfere with each other (e.g., by using separate cache directories and log files)[1].

                Let me know if you need further clarification or examples!

                [1]: Proper Way to Initialize multiple instances of CefSharp
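
                For what it's worth, here's a rough sketch of the per-instance RequestContext approach that answer mentions (WinForms flavour; the cache paths and proxy values are made-up examples, and the calls follow the pattern shown in CefSharp's documentation, so treat it as a sketch rather than tested code):

                  // Each browser gets its own RequestContext, and therefore its own cache/cookie store.
                  using System.Collections.Generic;
                  using CefSharp;
                  using CefSharp.WinForms;

                  var browserA = new ChromiumWebBrowser("https://example.com")
                  {
                      RequestContext = new RequestContext(new RequestContextSettings { CachePath = @"cache\a" })
                  };
                  var browserB = new ChromiumWebBrowser("https://example.com")
                  {
                      RequestContext = new RequestContext(new RequestContextSettings { CachePath = @"cache\b" })
                  };

                  // Preferences such as a proxy are applied per context, on the CEF UI thread.
                  Cef.UIThreadTaskFactory.StartNew(() =>
                  {
                      string error;
                      browserA.RequestContext.SetPreference("proxy", new Dictionary<string, object>
                      {
                          ["mode"] = "fixed_servers",
                          ["server"] = "127.0.0.1:8888"
                      }, out error);
                  });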

                  silvenga That sure looks strange. I've had fun with DeepSeek simulating a perfect terminal and installing PHP.
                  My guess is that this is related to how DeepSeek is trained: it generates its own CoT on the fly and reflects on itself as it goes, which often costs a lot of tokens unless you explicitly tell DeepSeek not to do that. However, performance degrades when you tell DeepSeek not to reason.
                  Maybe DeepSeek is more suitable for a "learn more" or "more details" or "second opinion" button?

                    Perplexity just moved o1 and R1 from the model selector to an option that you can add to the existing models. At least that's how it appeared to me. Might be wrong, though. But an "add reasoning" option seems like a good approach to me.

                      15 days later

                      Hey! We continuously benchmark newer models to ensure the best default options for features like Quick Answer. The selection process considers quality, speed, service latency, and cost.
                      We'll include information about the base model used by Quick Answer in the documentation (https://kagifeedback.org/d/5981-show-llm-model-used-in-quick-answer-info-box/5).

                      I'll mark this thread as nofix. Please let me know if this doesn't address or resolve the issue.
