
@Vlad only said that they don't want to introduce biases. Yet your response, @dorkbutt, includes things like:

>where they have a voice and if they should even be allowed to have as prominent of a voice
>result from a blatant propaganda website
>credence to the words of outright fascists
>transphobia, homophobia, and all of the sorts in the face of rising fascism
>being a bootlicker to the government
>these are people that shouldn't even be taken seriously if they have problems

There are a lot of opinions and biases in your post. You obviously have a right to them, but it seems to me that this is exactly the kind of thing @Vlad wouldn't want to have, and such biases would be unavoidable if it came to censoring results.


dorkbutt It is a gargantuan task to ask parents to educate their kids on what the internet is as there are a multitude of topics to cover.

Yes, being a parent is not easy. And:

dorkbutt The web exists in a great state of flux and to properly moderate it in its entirety for the intake of someone's children is next to impossible.

So it's not something to be expected to be done right by a search engine. Bad things (that, needless to say, depend on your definition of bad) will always remain, so as a parent you should rather still try to prepare your kids for those. I believe it is parents' responsibility to prepare their children for what they might encounter in the future, not to raise them in a cleanroom and in a bubble where everything is fine. In my eyes, this makes them stronger and more independent, and strong and independent thinkers are something I would like to see on the rise.

5 days later

Vlad Note that this is not the same as introducing a moral or political bias, as filtering malware sites directly and indisputably improves search results for the user, which we are in the business of.

I think this was mostly a reach on my part, but I can definitely see an avenue where a line would need to be drawn between what is considered malware and what is not. This still harkens back to the original slippery-slope fallacy, and there are third parties you could consult to get this information. But I think it reasonably extends. Should a torrent magnet aggregation website be considered malware, since it directly supports unauthorized access to intellectual property? How about websites that allow downloading of software labelled as malware for educational purposes? Some users could end up using such a website to take advantage of systems that have yet to patch the vulnerabilities those pieces of software exploit. This could just be hyperbole, but I hope it illustrates my original point.

@Vlad For any search engine that does that in the case of any current conflict, do they also pick sides in 26 other armed conflicts around the world where people are dying right now? https://www.cfr.org/global-conflict-tracker

I would say no, as it is not the business of a third party to directly intervene in another party's war. But there comes a time when those being subjugated ask for help, when sides can be taken and a conclusion can be reached. Ethiopia offers another case in point.

Ethiopia is looking to sue Meta for roughly $2bn over posts on its platform that allegedly incited hatred to the point of assassination and inflamed its civil war. This serves as a clear example that not taking a stance carries consequences of its own.

And then there is the Russia-Ukraine conflict, an ongoing battle of behemoths: Russia and ultimately its allies against Ukraine and its allies, with each side fueling those directly involved. Why should Kagi be in a position to passively allow the dissemination of misinformation about a conflict whose outcome, should a particular side win, would inevitably harm its bottom line? A country with aspirations like Russia's would never allow the web to remain as open as the one Kagi takes advantage of with its robust feature set.

@Vlad Did they carefully study all those conflicts to decide which side has higher moral ground so they can censor results from the other side? Or some human lives matter more than others?

I have to say, some human lives do matter more than others when the lives in question are seeking to eradicate others over something as arbitrary as state borders or identity.

@briskroad1698 There are a lot of opinions and biases in your post. You obviously have a right to do that, but it seems to me like this is exactly something that Vlad wouldn't want to have — and it would be a must if it came to censoring results.

But why should opinions and biases be immediately dismissed, when they are what differentiate people from one another? Is avoiding them even achievable? It is itself an opinion that information should be totally free to disseminate, even when there is undeniable harm involved; making no choice is still making a choice. I'm simply arguing from the perspective that not everything should necessarily be as readily accessible as it is today. That accessibility has contributed to the rise of fascist movements, so those in a position to curate this content share a responsibility for it.

Edit: I don't mean to make it seem like I want to sap all autonomy from parents, but rather to offer a helping hand, as many of them barely have a concept of the internet, while their kids will grow up immersed in one far removed from the era of its birth. And as the internet continues to mature at a rapid pace, these kids will not necessarily have a better grasp of it, so help will remain welcome.

15 days later

@Vlad I immediately subscribed to Kagi after reading Vlad's replies in this thread. I don't want my search engine weighing in on how I should (or should not, in this case) live my life. I already have enough people trying to tell me what to do everywhere I go, and their advice is perfectly terrible; it's a relief to use a service that isn't attempting to change who I am or what I believe. If Kagi expresses an (implicit) opinion on suicide, I will no longer trust that its results are tamper-free.

    Nigel Thanks, we are happy to treat you like an adult!

      Just a miserable thread that has been cursing my email inbox for almost a year now.

      I wanted to believe there were tech companies with a more honest accounting of their incredibly important role in society, and the responsibility that entails.

      I know that my prescriptive, woke-moralist opinion, that we should make even the most nominal effort to reach out to someone who might be at a low point, is just going to make eyes roll. I'm too exhausted to be angry about it anymore.

      Good luck with the libertarianism, Kagi. Please do not bother replying to this; I want nothing less than to see this wretched discussion again.

      15 days later
      • [deleted]

      This thread is absurd. I cannot believe the company's stance here.

      As I stated previously, we are a search company. We are not interested in introducing any sort of biases in the results, including moral.

      What a ridiculous point of view. The entirety of a search engine is bias! How else can you determine relevancy? And what is more relevant to someone feeling this way than how they can get help?

      4 days later

      Because I feel it appropriate to further accentuate my stance, I wish to share some media that harkens back to the point of curation. Lines must be drawn, for they already exist; it is important to draw them elsewhere so as to ensure that the disenfranchised are able to have a voice. To think technology has the magic of objectivity is a misleading ideal, as tech is predominantly built by cis white men, and because of this the technology we use will inherently carry that bias. We see this in surveillance systems, in the current iterations of language models, and more. I do hope there is a turn towards actual responsibility being taken, or else you will be giving a platform to the wrong types of people while alienating the powerless.

      [Mastodon Thread]
      [Mastodon Post]
      [Mastodon Post 2]

      [Link to original TikTok]

      [Original post with picture description]

        dorkbutt

        as tech is predominantly built by cis white men.

        Godwin's law for the 21st century. Time to pack it in fellas. Absolutely, mind-numbingly ridiculous reply, stance, and worldview.

          The assertion that products built by white men are inherently broken is so racist and intolerant that allowing @dorkbutt to continue his abuse of this platform by spreading such hate does a disservice to the Kagi community. His only purpose here is to destroy a good product and deprive its users of their enjoyment of said product.

            • [deleted]

            A lot of talk about ideals in this thread, but I think the bottom line is:

            1. Kagi is not providing a deterministic search algorithm; they are not selling an Elasticsearch competitor. The modern search engine user expects the engine to determine their actual needs and intentions from their search input, match those with relevant live content, and continue to adapt this service over time.
            2. The argument that providing a hotline number widget is somehow detracting from the search results below any more than any other widget which may be displayed based on the query is quite questionable.
            3. The argument that providing the number doesn't help appears to be objectively false. Google partnered with the Department of Health and the National Center for Mental Health to design its current behavior. It is a well-established belief in the mental health field that providing any amount of deterrence, alternative options, or hope at a critical moment will deter at least some people.

              [deleted] Customers who want censored and otherwise carefully manipulated search results already have many services available to them. There is no evidence that Kagi's subscribers expect Kagi to behave more like its competitors, and the fact that Kagi has been successful while doing things differently is strong evidence that your claim is actually false. Pointing out that the competition partners with the US government in deciding what content to show further demonstrates the need for Kagi to maintain its current stance with respect to content neutrality.

                • [deleted]


                Nigel It is not possible to return useful search results without manipulation. It’s not 1990 anymore - Kagi isn’t doing simple word counts in documents to determine relevancy. It’s not possible to return something useful in a sea of drivel, misinformation and AI generated garbage without some basis for truth (“bias”). Specific sites are chosen to be promoted in search ranking as an expected behavior. The widgets themselves are a “careful manipulation” of both content from an impartial source with a presentation altered by Kagi, but they seem like additional context specific information that don’t need to “censor” the results below. I don’t think anyone is asking for the actual search results to be changed.

                It sounds like many of you want a simple and transparent deterministic word count search across all publicly available resources found by the crawler, but the result of that is going to be entirely unusable and unsellable.

                It’s also funny you see Kagi as a savior to manipulation and censorship when the bulk of their search results come from Google.

                chexx Absolutely, mind-numbingly ridiculous reply, stance, and worldview.

                I would love to hear what the alternative should be, then, and why I should get off my high horse. Nazis are a reality; they exist today in a pathetic yet potent form, so I believe it is necessary to bring up the case of Nazi content being disseminated by Kagi.

                Nigel his abuse of this platform by spreading such hate does a disservice to the Kagi community

                What hate am I leveling, and against whom? All I have been touting is that there is more to this world than the optics produced by technology. The biases produced by technology made predominantly by white men are bad, but I never argued that the white men behind those developments are bad people.

                EDIT: and what abuse?? I have no say in this whatsoever, but I wish to remain a nuisance as I believe this to be an important conversation to have. Vlad seems adamant on his viewpoint, which is disappointing to me and others.

                I know it is difficult, but we'd like to keep the feedback forums emotionless. So please focus on having a merit-based discussion.

                This is a sensitive topic, which is why I feel we as a company need to have a clearly articulated and well-argued stance. I feel I have given it multiple times throughout this thread.

                The last few posts have certainly strayed from being merit-based. I am locking this discussion for a week (if I forget to unlock it, please remind me on Discord) to let everyone cool off so that we can hopefully continue a merit-based discussion in the future.

                  4 months later
                  7 days later

                  Vlad

                  So please focus on having a merit based discussion..

                  I can understand how you feel, that a lot of the discussion is not actionable for a company, so I made an account to talk about this, as it is important to me. I'm currently a free user, and I was planning to get a subscription once I'm out of the free 100 searches. This, however, is actually kind of a big roadblock.

                  I don't want to have to use my last few Kagi searches on finding a new search engine.

                  I've personally had to intervene in the prevention of self-harm, and I have strong feelings about what individuals and groups can and should do to help protect each other. We can provide mutual support in this small way and possibly divert someone who WANTS diversion.

                  The hard facts, from a corporate perspective:

                  • Search results that contribute to your users' deaths will reduce annual revenue
                  • Services with a user experience that includes facilitating the death of the user have inherent anti-user bias
                  • Not including suicide diversion information can create the perception that the company doesn't care about users in crisis

                  And in a less corporate thought pattern:

                  • Many users seeking information about self harm WANT someone to intervene, or to stop them
                  • Adding a diversionary note is good UX for those who want intervention, but not detrimental to those who do not, as it can simply be ignored and the desired content obtained
                    • A special widget doesn't change the actual search results
                    • An appropriately designed widget can be made totally distinct from the search results (this is already the case for Kagi widgets IMO)
                  • If the idea of seeing lifesaving information is abhorrent to some users, a configuration can be added to disable diversionary and warning notes

                    andrea Just to back up @andrea's important point, I have collected a small selection of academic resources on the subject. While I am a researcher (PhD student), suicide research and prevention is not my field, so I hope someone with experience in the field can chime in here. Still, I have done some digging and found scholarly sources that explain the problem and could be useful for context.

                    Mann JJ, Apter A, Bertolote J, et al. Suicide Prevention Strategies: A Systematic Review. JAMA. 2005;294(16):2064–2074. doi:10.1001/jama.294.16.2064

                    Overview of suicide prevention and interventions. Relevant quote: "The Internet is of increasing concern, particularly the effects of suicide chat rooms, the provision of instruction in methods for suicide, and the active solicitation of suicide-pact partners."

                    Durkee, T.; Hadlaczky, G.; Westerlund, M, et al. Internet Pathways in Suicidality: A Review of the Evidence. Int. J. Environ. Res. Public Health 2011, 8, 3938-3952. doi: 10.3390/ijerph8103938

                    Open access. An overview of how the Internet provides pathways both towards and away from suicide. Interestingly, it points out that searches like "I want to kill myself" returning suicide prevention resources did indeed produce a net outcome of fewer people dying by suicide.

                    If anyone here more experienced in the field discovers issues with the resources I link to here, please inform me and I will do my best to improve the accuracy of this comment.

                    I implore @Vlad to reconsider on this specific matter. Providing suicide prevention / helpline information in a widget for relevant searches can and does save lives, and we have the data to prove it.

                      @andrea @sbrl

                      It is only fair to ask you to read the entire thread before contributing, as your suggestions do not address, at the very least, the concerns raised here:

                      https://kagifeedback.org/d/865-suicide-results-should-probably-have-a-dont-do-that-widget-like-google/2

                      https://kagifeedback.org/d/865-suicide-results-should-probably-have-a-dont-do-that-widget-like-google/11

                      The issue is not narrow at all, and any solution we implement needs to be principled and scalable. So far, the only such solution is for us not to set a precedent and not to interfere with users' search results. This is both a principled and scalable position.

                        9 days later

                        Chiming in to say "Good thread". I think there are good points being made on both sides (and bad points on both, too).
                        I don't really have skin in the game, but @Vlad keeps referring to customers as "adults" while Kagi also offers a family plan. The advertising mentions "strict content filters to ensure children are not exposed to harmful content" (src), which seems almost like Kagi is taking a moral stance that children need to be protected more than adults?
                        In the end Kagi as a company can do what they want and us users can "vote with our dollars" 🤷.

                          MomentumBuffet

                          The advertising mentions "strict content filters to ensure children are not exposed to harmful content" (src) which seems almost like Kagi is taking a moral stance that children need to be protected more than adults?

                          These are controlled by parents and can be turned off. Sane defaults apply (safe search on).