Robots.txt has in recent years been redefined so it no longer means "do not list on search engines" but "do not crawl". So if an engine acquires the URL of a page in any other way (most commonly via links pointing to it), it may still decide to show it. Disallowing search listing is what the "noindex" flag is for.
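For illustration (the path here is hypothetical), a rule like this only asks compliant crawlers not to fetch those pages; it does nothing to keep an already-known URL out of the results:

    User-agent: *
    Disallow: /private/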
It's weird (especially since the removal of official support for noindex in robots.txt itself in 2019, you basically have to allow crawling if you want a page NOT to be listed, so that the crawler can pick up your noindex flags), but that's the way Google and Bing unilaterally redefined the standard, and everyone else basically has no choice but to follow.
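In practice that means leaving the page crawlable and shipping the directive with the page itself, either as a meta tag in the HTML head:

    <meta name="robots" content="noindex">

or as an HTTP response header:

    X-Robots-Tag: noindex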
Keeping pages out of search engine results is probably still a good idea, but it has nothing to do with robots.txt; the noindex directive should instead be hardcoded into the pages or response headers themselves.
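As a minimal sketch of what "hardcoded" could look like, assuming an nginx server and the same illustrative /private/ path, the header can be attached at the server level so every response under that prefix carries it:

    location /private/ {
        # tell crawlers not to index anything served from here
        add_header X-Robots-Tag "noindex";
    }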