HeartleafGames
This is getting a bit philosophical but warrants further explanation.
I am not claiming Kagi's output presents objective truth (nor is such a thing possible, for various reasons), but that it does not have any implicit bias built in by us that is not based on search quality. For example, we are open about having a search-quality bias: we downrank sites that have lots of ads and trackers.
What we do not have is moral, political, religious, social or any similar kind of bias, which would need to be explicitly coded into the algorithms and would represent the moral, political, religious, or social views and values of the company. The reason we don't is that we are not in the business of advocating those views; we are in the business of search.
We simply let the results happen based on search-quality factors alone. (Note that since Kagi gets some of its results from third parties, those results may inherit unwanted biases from those third parties; this is out of our control, but by using data from multiple sources we hope to negate or balance out those factors to some extent.)
Adding a suicide-prevention hotline widget on top of results for 'how to kill yourself' would be a manifestation of one such bias - in this case a moral one - meaning human interference in search results for moral reasons.
While I can sympathize with that as a human, I am also aware that setting a precedent with any non-search-quality bias would lead Kagi down the slippery slope of endless policing of results. And we already know that this is impossible to do well at any scale; the largest tech companies, with huge resources, are the prime example of failing to do it properly. So why expect Kagi, a 10-person startup, to do it well?
The very basic question of what you do (or don't) police next is not answerable. For example, 'how to kill an animal?', 'how to rob a bank?', 'how to hack a computer?' - although objectively less impactful than the original example - would eventually draw the attention of sufficiently large groups of people who will passionately call us out, on the same moral grounds, for not doing something about these queries. And again, this never ends; you end up being in the business of pleasing everyone. Good luck with that.
Thus the best option for us is to simply refuse to set that first precedent, no matter what the pressure, and stick to search being search. Perhaps one day Kagi may become your 'assistant' with personalised biases, but for now it is just a search engine.