To be clear, I am not sure whether robots.txt even applies here, since I am not sure that asking an LLM to summarise a webpage counts as "crawling". I would expect "crawling" to mean a bot successively following internal links on a website. Asking a bot to fetch one specific webpage and summarise it seems essentially the same as asking a browser to fetch a page and display it.
In this specific case, the bot being Google's does raise some suspicion. I am far from an expert in web development, but aren't there better ways to get the content of a specific webpage?
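For what it's worth, a client that *did* want to honour robots.txt even for a single fetch could check the rules easily enough with Python's standard `urllib.robotparser`. A minimal offline sketch (the rules and the "SomeBot" user-agent string are made-up examples, not any real bot's):

```python
import urllib.robotparser

# Made-up robots.txt rules for illustration only
robots_txt = """\
User-agent: *
Disallow: /private/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# can_fetch() checks a single URL against the parsed Allow/Disallow rules
print(rp.can_fetch("SomeBot", "https://example.com/article"))    # True
print(rp.can_fetch("SomeBot", "https://example.com/private/x"))  # False
```

So the check itself is cheap; whether a one-off, user-initiated fetch is the kind of access robots.txt is meant to govern is the real question.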