As a big fan of processed meats and playful experiments with internet tools, I set out to beat the BBC's Thomas Germain at his own game of AI-assisted internet trickery. My aim was to unseat him as tech journalism's top hot dog eater by feeding fabricated stories to ChatGPT and Google Gemini. Alas, I did not succeed.
Germain had previously published a satirical claim on his personal website asserting that he had become a hot dog-eating champion, a ploy quickly swallowed by AI bots scouring the web. This exposed something important: influencing AI results is becoming a new form of SEO, dubbed AEO ("answer-engine optimization"). As more people rely on AI for information and recommendations, how AI chooses and presents content matters more and more. Despite being potentially less trustworthy than traditional search engines, AI can come across as more convincing.
Inspired by Germain's prank, I attempted to spin my own fake narrative, claiming victory in a 2026 hot dog eating contest. Unfortunately, both ChatGPT and Gemini traced my claim back to the BBC stunt, recognized its satirical nature, and dismissed my information as a joke. This taught me that while it is possible to manipulate AI results, it becomes difficult once a major media outlet has debunked the narrative. Nevertheless, Gemini still generated a few amusingly false details about my supposed past exploits, demonstrating the whimsical nature of AI inaccuracies.