
chexx Absolutely, mind-numbingly ridiculous reply, stance, and worldview.

I would love to hear what the alternative should be, then, and why I should get off my high horse. Nazis are a reality; they exist today in a pathetic yet potent form, so I believe it is necessary to bring up the case of Nazi content being disseminated by Kagi.

Nigel his abuse of this platform by spreading such hate does a disservice to the Kagi community

What hate am I leveling, and against whom? All I have been touting is that there is more to this world than just the optics produced by technology. The biases that arise when technology is made predominantly by white men are bad, but I never argued that the white men behind those developments are bad people.

EDIT: and what abuse?? I have no say in this whatsoever, but I wish to remain a nuisance as I believe this to be an important conversation to have. Vlad seems adamant on his viewpoint, which is disappointing to me and others.

I know it is difficult, but we'd like to keep the feedback forums emotion-free. So please focus on having a merit-based discussion.

This is a sensitive topic, which is why I feel we as a company need to have a clearly articulated and well-argued stance. I feel I have given it multiple times throughout this thread.

The last few posts have certainly strayed away from being merit-based. I am locking this discussion for a week (if I forget to unlock it, please remind me on Discord) to let everyone cool off, so that we can hopefully continue a merit-based discussion in the future.

    4 months later
    7 days later

    Vlad

    So please focus on having a merit based discussion..

    I can understand how you feel, that a lot of the discussion is not actionable for a company, so I made an account to talk about this, as it is important to me. I'm currently a free user, and I was planning to get a subscription once I'm out of the free 100 searches. This, however, is actually kind of a big roadblock.

    I don't want to have to use my last few kagi searches on finding a new search engine.

    I've personally had to intervene in the prevention of self-harm, and I have strong feelings about what individuals and groups can and should do to help protect each other. We can provide mutual support in this small way and possibly divert someone who WANTS diversion.

    The hard facts, from a corporate perspective:

    • Search results that contribute to your users' deaths will reduce annual revenue
    • Services with a user experience that includes facilitating the death of the user have inherent anti-user bias
    • Not including suicide diversion information can create the perception that the company doesn't care about users in crisis

    And in a less corporate thought pattern:

    • Many users seeking information about self harm WANT someone to intervene, or to stop them
    • Adding a diversionary note is good UX for those who want intervention, but not detrimental to those who do not, as it can simply be ignored and the desired content obtained
      • A special widget doesn't change the actual search results
      • An appropriately designed widget can be made totally distinct from the search results (this is already the case for Kagi widgets IMO)
    • If the idea of seeing lifesaving information is abhorrent to some users, a configuration can be added to disable diversionary and warning notes
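
    To make the widget idea above concrete, here is a minimal sketch of how the decision to show a crisis banner could stay entirely separate from the result list. Every name here (isSelfHarmQuery, showCrisisWidget, the banner text) is a hypothetical illustration, not Kagi's actual code or wording:

    ```typescript
    // Hypothetical sketch: the banner decision never touches the ranked results.
    interface UserSettings {
      showCrisisWidget: boolean; // the opt-out toggle proposed above
    }

    interface SearchPage {
      results: string[];     // ranked results, returned exactly as-is
      crisisBanner?: string; // optional note rendered above them
    }

    // Assumed query classifier; in practice this could be a keyword list or a model.
    function isSelfHarmQuery(query: string): boolean {
      return /suicide|kill myself|self[- ]harm/i.test(query);
    }

    function buildPage(query: string, results: string[], settings: UserSettings): SearchPage {
      const page: SearchPage = { results };
      if (settings.showCrisisWidget && isSelfHarmQuery(query)) {
        // Additive only: the widget is distinct from, and does not reorder or
        // filter, the search results.
        page.crisisBanner = "Help is available; crisis support resources are linked here.";
      }
      return page;
    }
    ```

    The two properties the list above asks for (distinct from results, ignorable and disable-able) follow from the banner being a separate optional field rather than part of the result ranking.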

      andrea Just to back up @andrea's important point, I have collected a small selection of academic resources on the subject. While I am a (PhD Student) Researcher, suicide research / prevention is not my field. Hopefully someone who does have experience in the field can chime in here. Despite this, I have done some digging and found some scholarly sources that explain the problem and could be useful for context.

      Mann JJ, Apter A, Bertolote J, et al. Suicide Prevention Strategies: A Systematic Review. JAMA. 2005;294(16):2064–2074. doi:10.1001/jama.294.16.2064

      Overview of suicide prevention and interventions. Relevant quote: "The Internet is of increasing concern, particularly the effects of suicide chat rooms, the provision of instruction in methods for suicide, and the active solicitation of suicide-pact partners."

      Durkee T, Hadlaczky G, Westerlund M, et al. Internet Pathways in Suicidality: A Review of the Evidence. Int. J. Environ. Res. Public Health. 2011;8:3938-3952. doi:10.3390/ijerph8103938

      Open access. An overview of how the Internet provides pathways both towards and away from suicide. Interestingly, it points out that searches like "I want to kill myself" etc. resulting in suicide prevention resources did indeed produce a net outcome of fewer people dying by suicide.

      If anyone here more experienced in the field discovers issues with the resources I link to here, please inform me and I will do my best to improve the accuracy of this comment.

      I implore @Vlad to reconsider on this specific matter. Providing suicide prevention / helpline information in a widget for relevant searches can and does save lives, and we have the data to prove it.

        @andrea @sbrl

        It is only fair to ask you to read the entire thread before contributing, as your suggestions do not address the concerns raised, at the very least, here:

        https://kagifeedback.org/d/865-suicide-results-should-probably-have-a-dont-do-that-widget-like-google/2

        https://kagifeedback.org/d/865-suicide-results-should-probably-have-a-dont-do-that-widget-like-google/11

        The issue is not narrow at all, and any solution we implement needs to be principled and scalable. So far, the only such solution is for us not to set a precedent and not to interfere with users' search results. This is both a principled and scalable position.

          9 days later

          Chiming in to say "Good thread". I think there's good points being made on both sides (and also bad points on both).
          I don't really have skin in the game, but @Vlad keeps referring to customers as "adults" while Kagi also offers a family plan. The advertising mentions "strict content filters to ensure children are not exposed to harmful content" (src) which seems almost like Kagi is taking a moral stance that children need to be protected more than adults?
          In the end Kagi as a company can do what they want and us users can "vote with our dollars" 🤷.

            MomentumBuffet

            The advertising mentions "strict content filters to ensure children are not exposed to harmful content" (src) which seems almost like Kagi is taking a moral stance that children need to be protected more than adults?

            These are controlled by parents and can be turned off. Sane defaults apply (safe search on).

              11 days later

              Vlad Note regarding the concept of "Precedent"

              Different legal systems handle it differently: common law vs. civil law. In common law, precedent is treated as binding, so all laws and thinking need to be considered in the context of what came before.

              In civil law, it is left to the jurisprudence of the judge and jury. Cases that share overwhelming similarities can reach completely different outcomes, because the judge overseeing each case decides what the right approach to the situation is, hopefully achieving justice rather than being hamstrung by what other judges decided in different cases and different situations.

              I say this because, even while holding precedent, and ideally consistency in Kagi's direction, as a worthy goal, precedent in and of itself is, at least from my perspective, "something to keep in mind" rather than a goal in itself. After all, when metrics become targets they stop being useful metrics.

              Anyhow. Even though I agree that it is generally a bad idea to meddle with whatever the user is searching for, even with a pre-determined index result, I also believe that letting your users kill themselves because they are in a temporarily bad place, when you could have made a nudging difference in avoiding it, is generally "overly formal/bad practice" relative to your presuppositions of what "Kagi ought to be".

              Either way, regarding scalability:

              1. Reduce scope. It doesn't need to be an expansive warning, or have a phone number at all. It can be a simple, personally written message: "We at Kagi do not condone suicide, and do not recommend absolute decisions with absolute outcomes based on temporary issues.", then machine-translated into various languages.
              2. Keep it tied only to suicide. Suicide is an overwhelmingly bad outcome, so just decide to break precedent and have whatever message you choose appear only for suicide-related searches, rather than as a generalized thing.

              But yeah, these are my perspectives on the issue. I read most of the thread and kept seeing different people get tangled up in the concept of precedent, with the issue of scalability then popping up; I believe this comment addresses both points.

                sbrl While I am a (PhD Student) Researcher, suicide research / prevention is not my field.

                If you don't have expertise in this field, why did you consider it important to mention that you are a PhD student? Was it done in the hope of gaining more credibility despite having earned none in the matter at hand?

                15 days later

                This discussion is depressing. Here's my attempt to build on the initial (in my view extremely meritorious) suggestion and what people have already discussed:

                Kagi already enables safe search and safe image search by default (at least, that is how it appears to me). Kagi thus already has a precedent for how it curates its results: it omits NSFW results by default to make the experience ostensibly more pleasant for users.

                Instead of playing widget whack-a-mole, I'd suggest adding some settings to customize what "safe search" means for users. Right now, I presume it omits just adult material. Kagi could add two additional sub-toggles(?) that would both be on by default:

                Sub-toggle 1: Omit adult NSFW material of a sexual nature.
                Sub-toggle 2: Omit harmful content that could lead to loss of life or injury.

                When a user searches for "how to sacrifice squirrels to Odin", Kagi wouldn't list "Top 5 ways to kill squirrels in the name of the Norse gods". If a user wants to uncheck either toggle, Kagi could perhaps surface a dialog that warns the user about what they might see in their future search results, and explains that just because customers are using Kagi for questionable ends, that doesn't mean Kagi endorses or condones them. Maybe also prominently surface suicide help resources in the dialog when the user unchecks the second toggle.

                I can see this being a much more scalable way to ensure search results don't include questionable content that I don't think anyone should be seriously reading or consuming. It'd let the product operate on a more rules-based approach rather than making individual widgets and experiences for individual topics. But Kagi would still be letting users configure their experience to be more unrestricted if they wished.
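
                To make the sub-toggle idea concrete, here is a minimal sketch of how such a settings model and filter could look. Everything here (names like omitSexualNsfw, and the idea of an upstream content label on each result) is an assumption for illustration, not Kagi's actual configuration or API:

                ```typescript
                // Hypothetical safe-search settings with the two proposed sub-toggles,
                // both on by default.
                interface SafeSearchSettings {
                  omitSexualNsfw: boolean;     // Sub-toggle 1
                  omitHarmfulContent: boolean; // Sub-toggle 2
                }

                const DEFAULT_SETTINGS: SafeSearchSettings = {
                  omitSexualNsfw: true,
                  omitHarmfulContent: true,
                };

                type ContentLabel = "sexual-nsfw" | "harmful" | "none";

                interface SearchResult {
                  url: string;
                  label: ContentLabel; // assumed to come from some upstream classifier
                }

                // A result is dropped only if its label matches a toggle the user has
                // left enabled; unchecking a toggle restores those results unchanged.
                function applySafeSearch(
                  results: SearchResult[],
                  s: SafeSearchSettings = DEFAULT_SETTINGS,
                ): SearchResult[] {
                  return results.filter(r => {
                    if (r.label === "sexual-nsfw") return !s.omitSexualNsfw;
                    if (r.label === "harmful") return !s.omitHarmfulContent;
                    return true;
                  });
                }
                ```

                The point of the sketch is that the behaviour lives in one generic rule (label plus toggle) rather than in a per-topic widget, which is what would make it scale.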

                  7 days later

                  What Vlad seems to miss here in my view is that including 0 "nudges" requires just as much defense and is just as much of a position that must be justified as including 1, 2, or so on. Slippery slopes do exist, but no one in this thread has given a reason why including 1 message like this means they will be compelled to do 2 or 10 or 10000. For the most part I just want Kagi to get out of my way and do not want its commentary on my searches despite recognizing that search is a task which is inherently a matter of balancing biases.

                  But like, what's the practical reason not to include this? The very worst it can do is nothing, and its goal of saving lives is uncontroversial. If the only reason not to do it is a fear of the slippery slope, then please tell me what makes the slope so slippery. Do you really think someone will see that one warning message and think "Oh, this gives me ground to argue there should be more warning messages about other stuff! I'll go to their forums now and annoy them!"? Of course they won't, you'll get requests for adding warnings for whatever pet issue a person has regardless. And whether you have 0 or 1 or 9999, your choices are to ignore it or to argue for your position. There's nothing saved by choosing 0, no additional consistency in your principles which you cannot get by making a different choice.

                    4 days later

                    Vlad

                    Well, I ended up paying for a subscription, so so much for voting with my wallet.

                    I do want you to know that I read the entire thread at the time that I made my comment.
                    I'll try to address the items you pulled out now.

                    Interesting, where do you draw the line there for providing a 'help widget'? What if you want to rob a bank? Kill an animal? Commit fraud? Hack someone's computer? When should we decide that we should "intervene" and produce something the user did not search for? If we do, there are endless ways more can be requested, annoying people who are genuinely curious.

                    Since I proposed a widget that appears outside of the actual search results, the intervention is minimal and still does produce the desired results, but simply offers an alternative "off ramp". Expanding to other intervention areas doesn't seem very useful, but could still be possible and disable-able in the same way that I mention for hardcore self-harm/crime users.

                    The second is more philosophical but I think the unresolved portion is

                    Thus the best option for us is to simply refuse to set the first precedent, no matter what the pressure is, and stick to search being search. Perhaps one day Kagi may become your 'assistant' with personalised biases, but for now it is just a search engine.

                    In which case I am suggesting that this harm reduction strategy is, indeed, a bias towards trying to dissuade suicide. And that's ok. And if people think it's not ok, they can turn it off, and never see a little widget above their search results for suicide.

                    And because I also read the replies afterwards I want to discuss Whom's message

                    But like, what's the practical reason not to include this? The very worst it can do is nothing, and its goal of saving lives is uncontroversial. If the only reason not to do it is a fear of the slippery slope, then please tell me what makes the slope so slippery. Do you really think someone will see that one warning message and think "Oh, this gives me ground to argue there should be more warning messages about other stuff! I'll go to their forums now and annoy them!"? Of course they won't, you'll get requests for adding warnings for whatever pet issue a person has regardless. And whether you have 0 or 1 or 9999, your choices are to ignore it or to argue for your position. There's nothing saved by choosing 0, no additional consistency in your principles which you cannot get by making a different choice.

                    Yes, this certainly seems like the best one-time exception, never to be repeated.

                    Lurker here. I actually agree with @Vlad's scepticism (for the above mentioned reasons which I will not repeat).

                    But some people want this 'feature'. So why not implement @robertoftheedwards's proposal?

                    robertoftheedwards Instead of playing widget whack-a-mole, I'd suggest adding some settings to customize what "safe search" means for users. Right now, I presume it omits just adult material. Kagi could add two additional sub-toggles(?) that would both be on by default:

                    Sub-toggle 1: Omit adult NSFW material of a sexual nature.
                    Sub-toggle 2: Omit harmful content that could lead to loss of life or injury.

                    Combined with a sensible default, this seems like a good compromise.

                    Thoughts?

                      KagiForMe If they are opt-in, that means the person 'in need' would need to know to enable them first, which kind of defeats the purpose.

                      This has been a long-running thread around a complex issue and I thank everyone for their input so far. I am, as many of you are, emotionally moved by this question on a personal level. At the same time, I do not want to allow the business that is meant to serve users globally to be led by my own or any employee's personal emotions and biases, and this is why Kagi has principles of operation. One of them is that Kagi is focused on the business of search and providing the best search results in the world, avoiding precedents that would take it off that path.

                      In general I would want these three things addressed:

                      • Evidence that Kagi results for suicide queries are currently not adequate. When I check them, I conclude that the situation is completely the opposite and that Kagi already has the most pro-life results (in the context of suicide) of any search engine in the world. The reason this is the case is that we surface many more results from personal sites/blogs (as part of our humanize-the-web mission and Small Web initiative), and many of them contain personal stories of attempt/regret etc. I'd argue, therefore, that even without the widget our results are more humane than any other search engine's. Being able to get real stories from real people in results is a much better way to handle this, in terms of the desired outcomes we all want to see, than any sort of generic widget (and this all happens completely organically, by Kagi's design).

                      • Evidence or research that suicide hotline widgets actually help in a meaningful way. (If they are as effective as claimed, such evidence should be possible to produce.)

                      • If Kagi is to cross the line here and interfere with results for moral reasons, what principle would we use to handle further matters regarding queries about, for example, homicide, torture, abuse, animal cruelty and other topics that are sure to get scrutiny from another group of users down the road, asking why the cause they care about does not have a warning widget when we have one for suicide, when they feel much more strongly about their thing. How do you deem one set of those morally acceptable to be left unattended while intervening in others? This is the slippery slope question.

                        Vlad Asking for evidence is fair. I have a couple ideas about that, but I think they'd be a bit off topic for this thread.

                        Re: the principle, I've personally found that a lot of apps and services don't appreciate enough what it means for some feature to be on by default, or not meaningfully configurable, for every single person globally. Suicide is easy to call bad across cultures. Other topics - say, religion - can be a lot more polarizing. I've always liked how many ad blockers, like uBlock Origin and others, let users easily define their own rules in addition to whatever defaults the service comes with out of the box. Many times users can throw those same defaults out the window if they don't like them.

                        If and when Kagi wants to wade further (beyond safe search) into the murky waters of curating search results based on their subject matter, I'd really suggest doing so in a way that lets people easily define the rules that curate the results, so that they can feel Kagi is reflecting their morals and ethics rather than imposing its own on them.

                        In a possible future for such a feature, if I were initially guided through that feature and told "this rule omits results about harming people or animals, you can turn it off though", I'd probably keep it on. I'd still really appreciate in addition being told "you can fine-tune this later as well if you find it's interfering with your experience using Kagi."
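
                        A rough sketch of the ad-blocker-style idea: shipped default curation rules that user-defined rules can override. The rule shape and names here are invented purely for illustration and are not a real Kagi (or uBlock Origin) format:

                        ```typescript
                        // Hypothetical curation rule: match results by a topic tag and pick an action.
                        interface CurationRule {
                          topic: string;                     // e.g. "self-harm", "animal-cruelty"
                          action: "omit" | "warn" | "allow";
                        }

                        // Defaults shipped by the service.
                        const defaultRules: CurationRule[] = [
                          { topic: "self-harm", action: "warn" },
                        ];

                        // User-defined rules take precedence over the defaults, mirroring how ad
                        // blockers let custom filters override the bundled lists.
                        function resolveAction(
                          topic: string,
                          userRules: CurationRule[],
                        ): CurationRule["action"] {
                          const rule =
                            userRules.find(r => r.topic === topic) ??
                            defaultRules.find(r => r.topic === topic);
                          return rule?.action ?? "allow";
                        }

                        // A user who wants no intervention at all simply overrides the default:
                        const myRules: CurationRule[] = [{ topic: "self-harm", action: "allow" }];
                        console.log(resolveAction("self-harm", myRules)); // "allow"
                        ```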

                        Carnival0852 Let's try to keep the language of the discussion constructive.

                        Having said that, a browser extension is not a bad idea in general, one that could offer the necessary support by following the user around the web.

                          Whom

                          including 0 "nudges" requires just as much defense and is just as much of a position that must be justified as including 1, 2, or so on.

                          Deciding between zero and one, two, 9999, etc. nag screens isn't, in fact, arbitrary. There is an asymmetry between zero and non-zero: it takes manpower and money (my money!) to build, maintain, and support each non-zero nag. It takes none of my money to have zero of them, and it allows those developer resources to be used on actual search features.

                          What would Kagi spending my money to build this feature provide you that installing the Ripple suicide prevention browser plugin wouldn't? The browser plugin is objectively a more comprehensive and robust solution, as it works across the whole web instead of just one domain.