
When searching for keywords like "kill self how to", Google returns a widget that links to a suicide hotline, while Kagi just returns actual results on how to kill yourself.

    • Best Answer set by Vlad

    We decided to close this discussion as it fizzled out and it was unlikely that we would come to any new conclusions. If you think you have a new angle on the topic, please open a new thread.

    Here is our final take on it:

    As a search engine, Kagi is used to find a wide variety of information to inform and improve its users' lives. There is, however, a wide variety of content online that some may find distasteful, problematic, or immoral.

    This raises difficult questions about whether Kagi should censor information found on the open web or augment results with messaging to inform the user they may be making poor choices.

    We do not censor information and we take our responsibility to serve our users and society seriously. When evaluating these types of decisions we use the following principles.

    • Be transparent about how we think about and make decisions; we seek alternative perspectives and feedback and engage in discussions in public forums.
    • These are decisions where reasonable people can disagree. We try to ensure that difficult decisions are based on well-defined and agreed-upon principles rather than made arbitrarily.
    • If we cannot define workable principles we seek to create accountability for the people making what end up being arbitrary decisions.
    • Ideally many of these decisions with societal impact are made not by a company's employees, but by representative governments. We abide by all applicable laws governing what information people should be able to find online.

    SyST3MDeV Interesting, where do you draw the line for providing a 'help widget'? What if you want to rob a bank? Kill an animal? Commit fraud? Hack someone's computer? When should we decide to "intervene" and produce something the user did not search for? If we do, there are endless ways more can be requested, annoying people who are genuinely curious.

    And Google, with all its power and resources, seems to show 'help' for English queries only; I got no "help" when I searched in a couple of foreign languages.

      Vlad I completely understand the hesitance here; I moved to Kagi specifically because I hated how often Google would try to get "smart", then fall on its face because of it. I would argue that this is a special case, as this search indicates a direct potential threat to human life, something that at least personally overrides any principles about just letting the search algorithm do its thing. Additionally, I'd argue that the potential benefit (saving a human life) vs. the potential annoyance is worth it. Once again though, I understand if this won't be implemented, to avoid the kind of clutter Google usually produces.

        SyST3MDeV It is a matter of trying not to set a precedent. There are a lot of types of searches that present a 'potential threat to human life', and I assume you searched that without suicidal thoughts (the key being potential, so not all searches will qualify). Anyway, let's engage the user community here - I would like others to chime in: should Kagi go down the path of 'helpful' widgets that do not address the original query?

          I have to say, as awful as it might sound, I'm against adding this. Injecting anything like this, even if it's the "right" thing to do, is a slippery slope that I hope Kagi never starts down.

          I'm also against adding this feature to Kagi.
          I'm a loyal reader of https://postsecret.com, so it's difficult for me to ignore this claim... but I also think that building the Kagi identity is a process that would be helped by transparency, cleanliness, uniformity, and predictability.
          So I'm empathetic with SyST3MDeV, but I would nonetheless avoid whatever special treatment comes up in the future.

          Vlad You talk in the development blog about developing APIs for finding the most relevant search results. I wonder if it might be prudent to return a suicide hotline website as the most relevant result for someone searching for how to suicide.

          Decisions like these are likely to be never-ending, unfortunately. In designing an algorithm to find 'relevance', one is making curation decisions, and those decisions have ethical consequences. I wish you luck, sincerely! I hope that employing an ethicist is on your funding horizon, even though I'm sure it's not an immediate priority.

            HeartleafGames

            a suicide hotline website as the most relevant result for someone searching for how to suicide.

            Strictly speaking, this is not relevant to the search query, which is what we are discussing (a search engine deciding to be 'smart' on behalf of the user).

            Related, are there any studies that show that the helpline widget actually helps people in these cases?

            Decisions like these are likely to be never-ending, unfortunately.

            Only when you decide to set a precedent do you open a Pandora's box of never-ending "policing" of results, which is why we are pushing back on this.

            If you stick to search being search and do not impose biases - moral or any other kind - you do not have these problems. You let the search surface results based on relevancy algorithms, not human-curated decisions.

              Vlad Algorithms are written by humans, and ‘relevance’ is often not an objective measure. It requires interpretation, language processing, weeding out unwanted or falsely popular results. In other words, curation.

              You have the advantage of being post-Google and able to see all of the stumbling blocks they've had to get over. The data surrounding algorithmic bias is clear, and no academic institution I am aware of teaches that algorithms are magically objective in every use case. Unintended side effects will occur, your blind spots will bite you, and you will need to address that in the future. The alternative is a system that does harm, and a userbase that recognizes that harm (and, consequently, a lack of a userbase).

              It is disappointing to hear that you believe an algorithm you write is somehow wholly immune to bias. A service cannot abdicate its responsibility for deciding what content it shows to people simply because no one deliberately or directly made it happen. Your algorithms (and thus, you) are the arbiter of what information someone sees when they search the Internet. That comes with duties beyond writing the algorithm that shows an organically popular result.

              I implore you to recognize that responsibility.

                HeartleafGames

                This is getting a bit philosophical but warrants further explanation.

                I am not claiming Kagi's output presents objective truth (nor is such a thing possible, for various reasons), but that it does not have any bias built in by us that is not search-quality based. For example, we are open about having a search-quality bias against sites that have lots of ads and trackers, downranking them in results.

                What we do not have is moral, political, religious, social, or any similar kind of bias, which would need to be explicitly coded into the algorithms and would represent the moral, political, religious, or social views and values of the company. The reason we don't is that we are not in the business of advocating those views but in the business of search.

                We simply let the results happen based on search-quality factors alone. (Note that since Kagi gets some of its results from third parties, these may inherit unwanted biases from those third parties; this is out of our control, but by using data from multiple sources we hope to negate/balance out those factors to some extent.)
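
                To illustrate the balancing idea, here is a toy sketch of blending ranked lists from several sources with reciprocal rank fusion, one standard technique for this; the source names and data are made up, and this is not a description of our actual pipeline:

                  from collections import defaultdict

                  def fuse_rankings(rankings, k=60):
                      """Blend several ranked result lists (reciprocal rank fusion).

                      rankings maps a source name to its ordered list of URLs. Each
                      URL's fused score sums 1 / (k + rank) across sources, so a URL
                      must rank well in several sources to rank well overall - one
                      source's quirks (or biases) get diluted by the others.
                      """
                      scores = defaultdict(float)
                      for ordered_urls in rankings.values():
                          for rank, url in enumerate(ordered_urls, start=1):
                              scores[url] += 1.0 / (k + rank)
                      return sorted(scores, key=scores.get, reverse=True)

                  # Hypothetical upstream indexes that disagree; fusion rewards consensus.
                  print(fuse_rankings({
                      "index_a": ["a.example", "b.example", "c.example"],
                      "index_b": ["b.example", "a.example", "d.example"],
                  }))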

                Adding a suicide prevention hotline widget on top of 'how to kill yourself' results would be a manifestation of one such bias - in this case moral - meaning human interference in search results for moral reasons.

                While I can sympathize with that as a human, I am also aware that setting a precedent with any non-search-quality bias would lead Kagi down the slippery slope of endless policing of results. And we already know that this is impossible to do well at any scale; the largest tech companies, with huge resources, are the prime example of failing to do it properly. So why expect Kagi, a 10-person startup, to do it well?

                The very basic question of what you do (or don't) police next is not answerable. For example, 'how to kill an animal?', 'how to rob a bank?', 'how to hack a computer?' - although objectively less impactful than the original example - would eventually draw attention from sufficiently large groups of people who would passionately call us out for not doing something about these queries, on the same moral grounds. And again, this never ends; you end up being in the business of pleasing everyone. Good luck with that.

                Thus the best option for us is to simply refuse to set the first precedent, no matter what the pressure is, and stick to search being search. Perhaps one day Kagi may become your 'assistant' with personalised biases, but for now it is just a search engine.

                  Vlad I just want to wholeheartedly say: thank you. This is exactly why I came to Kagi and why I intend to pay for it. It's one thing for every other search engine, whose financial interests are aligned with advertisers instead of users, to control what their users see; it would be entirely unprecedented for a search engine financed by the user to still end up messing with the user. This alone is a strong argument for why, even if other search engines are doing it, it should never be OK for user-financed search engines like Kagi to feel pressured to fall in line with the rest - you have a valid reason, on top of neutrality, not to do it.

                  Plus, without a proper search engine you can't really use the internet. I consider the relationship between search and the internet to be like the saying "if a tree falls in the forest but no one is around to hear it, did it really fall?" - the search result you're actually seeking, found on page 357 of the results, is no different than if it didn't exist on the internet in the first place. The search engine is the compass for navigating the forest of the internet.

                  For suicide, or robbing a bank, or any other argument that may have merits for creating an exemption "just this once", it's good to keep in mind that if a website were REALLY that bad, the government could take it down with the help of Interpol, or court-order the website owner, which automatically removes it from all search engines as well (it's not like Kagi shows Tor site results). So all this debate about what search engines should do about harmful content online isn't even necessary from the start: the weight of these matters doesn't have to lie on the shoulders of search engines at all, as long as they aren't favoring/pushing the questionable content any differently than any other search result. Semi-joking, but you might as well set up a bot that responds to further exemptions to search neutrality with: "You've reached the wrong support group. Please forward your potential concern to Interpol and/or your local government to consider a take-down request of the specific website(s) in question instead of dragging neutral search engines into this, thanks."

                  I'm sure you know you won't be seeing the last of these sorts of discussions about Kagi, but I feel it's important to always keep in mind that for every person calling for an exemption to search neutrality, there are just as many Kagi users, if not more, like myself who don't want any slippery-slope exemptions in the first place - especially not when we're paying for it. Just look at this thread so far: for every one person who wants exemptions to search neutrality, two or more others respond with no. For all any of us know, a certain unknown % of users advocating for search engine exemptions could be working for Google and friends, with an agenda of keeping people from using the great power that is quality search. You've already, at such a small size, created better search results than a trillion-dollar company; that's no accident. Again, thank you.

                    NoGoogle Thanks for taking the time to make your voice heard. We are incentivised to stay (only) in the business of search, for as long as our users want us to.

                    Vlad

                    While I'm understanding of your position as a startup, as well as the relatively small impact this change would currently have, I think it's dangerous not to incorporate what we've learned from the giants into your operating philosophy. Reddit, Twitter, even Google all had to come to terms with the harm that an entirely hands-off approach can have. Facebook is an outlier in that they just DGAF.

                    I think the question comes down to what you mean when you say things like "Kagi is a company created with the mission to humanize the web" and "we want to develop a refined search experience for sophisticated customers who value high-quality results".

                    Again, dipping into the philosophical perspective: what would Kagi's stance on misinformation be? If someone searches for "covid vaccine side effects", would you be comfortable presenting results that are not based on facts and evidence (e.g. that it makes your testicles swell massively)? Your current results are almost entirely resources from health experts - maybe that's because of the algorithm, and if the algorithm started presenting misinformation you wouldn't be concerned about it so long as it fit your algorithmic criteria of "high-quality results".

                    I'm not trying to judge your mission or moral/ethical decisions here, but rather present a lens by which we can all judge what's relevant to the mission.

                    That leads me to these two possibilities for intent. As an aside, to HeartleafGames' point, I don't think it's possible to fully decouple the two, but I'm happy to focus on intent here.

                    1. Kagi as a company currently chooses to tune the algorithm to reduce the spread of (harmful) misinformation/sensational results

                      • Given this scenario, it becomes patently ridiculous to say that Kagi doesn't "control what their users see" or that it's possible for search to be "neutral". In that case, I think a suggestion like this one, where you present a harm-reduction option, is valid and should be given serious moral/ethical consideration, although not necessarily a place on the roadmap.
                      • As I touched on at the start, the impact of such an option is likely to be extremely small with only a few thousand DAU; in fact, I think it's equally valuable for people to have access to high-quality knowledge on methods of suicide. Pretending it doesn't exist helps no one.
                    2. The current results are a product of purely algorithmic choices (e.g. "the presence of ads or trackers") that happen to result in a set of high quality results

                      • Don't do it, there is no step 2.

                    Again, I'm not intending to pass judgement on the choices you make at Kagi - you're working on a hard problem - but I think it's important to work from a foundational level when discussing these sorts of things.

                    For what it's worth, I'm happy to pay for Kagi either way. I think the results generally -are- significantly better from a moral perspective, even if largely by virtue of decoupling monetization from information.


                    Related, are there any studies that show that the helpline widget actually helps people in these cases?

                    Probably not for this specific use case; the only real reference I can think of is from Malcolm Gladwell's Talking to Strangers, about the idea of "coupling", where the decision to commit suicide is strongly "coupled" to access to methods. It's not strictly relevant, but it's what I thought of.

                      dysiode Our stance is that we are in the business of search, which means that we are not interested in being the "arbiter of truth".

                      This means current Kagi results will be a result of purely search-related, algorithmic choices, as far as we are concerned.

                      Rather than deciding for the user what the 'truth' is, we built powerful tools like lenses and domain whitelisting/blacklisting so that each user can tune the results any way they see fit.
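
                      As a toy sketch of what that user-side tuning could look like (the rule format, weights, and data here are invented for illustration, not our actual implementation):

                        from urllib.parse import urlparse

                        def apply_user_rules(results, blocked, boosted, boost=1.5):
                            """Drop blocked domains, up-weight boosted ones, re-sort.

                            results is a list of (url, relevance_score) pairs; blocked
                            and boosted are sets of domains chosen by the user.
                            """
                            tuned = []
                            for url, score in results:
                                domain = urlparse(url).netloc
                                if domain in blocked:
                                    continue  # the user never wants to see this domain
                                if domain in boosted:
                                    score *= boost  # the user prefers this domain
                                tuned.append((url, score))
                            return sorted(tuned, key=lambda r: r[1], reverse=True)

                        # Hypothetical user rules: block pinterest.com, boost en.wikipedia.org.
                        print(apply_user_rules(
                            [("https://pinterest.com/pin/1", 0.9),
                             ("https://en.wikipedia.org/wiki/Foo", 0.7),
                             ("https://example.com/foo", 0.8)],
                            blocked={"pinterest.com"},
                            boosted={"en.wikipedia.org"},
                        ))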

                      I do not even know what the "truth" is in my own house, let alone in the wild wide world. (This is to say: companies with billions of dollars in cash and thousands of the smartest people in the world still fail to do this properly; how can we expect a 10-person startup to get 'truth arbitration' right?)

                      The only thing that is principled, scales, and avoids the slippery slope is sticking to (just) being in the business of search.

                        I very much appreciate @Vlad 's statements here. They seem to reflect an intent to create a search product that assumes the user is basically a responsible adult who can competently weigh surfaced results on their own merit.

                        Some here are effectively arguing that many Kagi users are incapable of handling the internet without qualified guidance, and that surfaced search results might need to come sufficiently Clarified and Annotated so users always know the socially correct way to digest them. While this mentality might be appropriate for companies who are in the business of managing public opinion (e.g., Twitter, Google, etc), it seems completely out of place here. Am I not my own master? Figuratively speaking, am I not allowed to drink my coffee without cream and sugar? If I am not, I would still say Kagi's role isn't to save me from myself.

                        As an aside (and because this thread started with such), Death seems to me as valid a choice for any responsible adult as Life is—one we reflect on at least subconsciously in choosing to live the way we best know how—and I somewhat resent the meddling of do-gooders with their tiresome focus on "harm prevention" when finding quality information on suicide may actually be completely pertinent to certain people at certain stages of their lives (e.g., the terminally ill in countries where assisted suicide is not available). I doubt many here would advocate adding a widget to steer those seeking information on abortion away from having one, and I honestly don't see a great deal of difference between this and that.

                        @Vlad If I search for 10 + 10 I get a calculator widget. And when I search for a conversion from inches to centimeters I get a similar thing. Why are you trying to police my search results instead of just giving me the best information? How do I know you're not just trying to decide for me what 10 + 10 really means?

                        Obviously I am just using this to make a point. You already have a precedent for presenting something before the search results. I would argue that presenting a suicide-prevention widget is, in most cases, more useful to the user than not when someone is searching how to kill themselves. It doesn't mean you have to modify the search results - just display it as information before the normal results.

                          hitch In that case (as also with the weather, stock, or Wikipedia widgets), the widget is complementary to the results being shown, in a way that accelerates the user finding the correct answer. The same information will also be present in the results themselves; the widget just saves a few extra clicks.

                          However, in the case of the suicide widget discussed here, showing a helpline would not accelerate finding the information the user is searching for, and the widget would be motivated by a moral bias.

                          Widgets themselves are not problematic; it is the motivation behind them that is the subject of this discussion. As long as it is strictly search-relevancy related, we are OK with having widgets.

                          To expand on the oversimplification of "10 + 10": if you were to ask "suicide hotline number", it'd be great to show a widget with the contact information. However, injecting that moral solution into a question which asks for the opposite, based on what the team feels is the morally "correct" resource for the end user to view, would be biased.
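
                          In pseudocode, the rule is roughly the following (a toy sketch; the trigger patterns and widget names are made up for illustration and are not our actual implementation):

                            import re

                            # A widget fires only when it is itself the answer to the query,
                            # never as a moral overlay on a query it does not answer.
                            WIDGET_TRIGGERS = {
                                re.compile(r"suicide (hotline|helpline)"): "hotline_contact_card",
                                re.compile(r"^[\d\s+\-*/().]+$"): "calculator",
                                re.compile(r"\d+(\.\d+)?\s*(inches|in)\s+to\s+(cm|centimeters)"): "unit_converter",
                            }

                            def widget_for(query):
                                q = query.lower().strip()
                                for pattern, widget in WIDGET_TRIGGERS.items():
                                    if pattern.search(q):
                                        return widget
                                return None  # no injection: the results speak for themselves

                            print(widget_for("suicide hotline number"))  # hotline_contact_card
                            print(widget_for("10 + 10"))                 # calculator
                            print(widget_for("how to kill yourself"))    # None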

                          Staying wholly "neutral" on morality, politics, and all other high-blood-pressure-inducing conversations will always be met with people on one side or the other taking issue with your "non-stance".