Generally, this is something we can't prevent without some nasty hacks on our end - and even then they would have limitations.
The problem is that we use multiple upstream sources - for the sake of example, let's keep it at two, A and B. We make a single API call to each, mix the results, and that's your first page. When you request the second page, we repeat the API calls with the appropriate pagination parameters and append the new results to your page.
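To make the mechanism concrete, here is a minimal sketch of that merge. The upstream names, data, and the `fetchPage` helper are all hypothetical, but the shape matches the description above: one call per upstream per page, results interleaved, no memory between pages.

```javascript
// Hypothetical upstream result sets - each "API" paginates independently.
const upstreamA = ["foo.com", "bar.com", "baz.com", "qux.com"];
const upstreamB = ["zap.com", "bar.net", "foo.com", "quux.com"];

// One call to each upstream per page (here simulated with slice),
// then the two result lists are mixed into a single page.
function fetchPage(page, pageSize = 2) {
  const start = page * pageSize;
  const a = upstreamA.slice(start, start + pageSize);
  const b = upstreamB.slice(start, start + pageSize);
  return [...a, ...b];
}

fetchPage(0); // → ["foo.com", "bar.com", "zap.com", "bar.net"]
fetchPage(1); // → ["baz.com", "qux.com", "foo.com", "quux.com"]
```

Note that foo.com is on page 1 of upstream A but page 2 of upstream B, so it shows up in both combined pages - exactly the duplication described below.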
As a result, foo.com might be on page 1 of API A but then show up on page 2 of API B. Since we do not store the results, we currently have no way of knowing whether you have already seen foo.com. We could check our cache, but the cache is necessarily short-lived - meaning if you waited long enough before hitting "more results", there would be no cache to deduplicate against. Finally, we could write some JS to deduplicate client side, but that is not great either.
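For completeness, the client-side option would look roughly like this - a sketch only, with hypothetical names: keep a Set of result URLs already rendered and drop repeats as new pages arrive.

```javascript
// URLs the user has already been shown in this browsing session.
const seen = new Set();

// Filter a freshly fetched page down to results not yet displayed,
// and remember them for subsequent pages.
function appendResults(results) {
  const fresh = results.filter((url) => !seen.has(url));
  fresh.forEach((url) => seen.add(url));
  return fresh;
}
```

The drawbacks still apply: the Set lives only as long as the page, and silently dropping duplicates shrinks each page below its nominal size.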
It would be great to fix this, but given the low impact and the lack of any robust fix, we can't promise anything for now.