Google- and poll-bombing: voting or searching for the same terms en masse, either to sabotage an online vote or to make a topic trend artificially. 4chan has successfully gotten a swastika to trend on Google.
The cyberbullying of Jessi Slaughter: one of the earliest high-profile incidents of cyberbullying, in which 4chan members sent death threats and harassing calls to an 11-year-old girl who would later make multiple suicide attempts.

Gamergate: an ongoing movement to expose “corruption” in video game journalism, which was (purportedly!) drummed up by 4chan users. Gamergate has since wrecked the lives of several female gamers and commentators and spawned a larger discussion about the way the industry treats women.

Celebgate: the leak of dozens of stolen celebrity nude photos, which - while no longer available on 4chan - still exist as downloadable torrents across the Web.

OpenAI unveiled GPT-2 earlier this year, and it’s probably the most advanced system of its kind, capable of generating text in a variety of formats, from jokes to stories to songs.
One link shared by the r/Conservative bot has the title “Israel goes on to beat Palestinian children at gunpoint in Bethlehem’s streets ‘to the bone’.” It links to what looks like a story from UK paper The Telegraph (a fittingly right-wing publication), but although the URL looks entirely plausible, when you click it you find the article doesn’t exist.

All this AI hubbub is the creation of redditor disumbrationist, who explains some of the technical details behind the project here. Each of the bots was created using an open source AI language model called GPT-2 that was originally developed by OpenAI, an artificial intelligence lab co-founded by Elon Musk.

(We’ve reached out to disumbrationist with some questions and will update this story if and when we hear back.)
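For a flavor of what a model like GPT-2 actually does at generation time: it produces text one token at a time, sampling each next token from a probability distribution computed over its vocabulary, with a "temperature" knob controlling how adventurous the sampling is. Below is a minimal, illustrative sketch of that sampling step; the vocabulary, logits, and temperature values are made up for demonstration and have nothing to do with the real model's weights.

```python
import math
import random

def sample_next_token(logits, vocab, temperature=0.7, rng=None):
    """Sample one token: softmax over temperature-scaled logits."""
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(vocab, weights=probs, k=1)[0]

# Toy vocabulary and logits -- illustrative only, not real GPT-2 values.
vocab = ["the", "asshole", "bitcoin", "memes"]
logits = [2.0, 1.0, 0.5, 0.1]
print(sample_next_token(logits, vocab, rng=random.Random(0)))
```

Lower temperatures concentrate probability on the most likely tokens (more predictable text); higher temperatures flatten the distribution (weirder text), which is part of why chatbot output swings between coherent and surreal.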
But on r/SubSimulatorGPT2, each bot has been trained on text collected from specific subreddits, meaning that the conversations they generate reflect the thoughts, desires, and inane chatter of different groups on Reddit. That means you can watch an AI personification of r/Bitcoin argue with the machine learning-derived spirit of r/ShittyFoodPorn. Or dip into a thread populated entirely by r/AmItheAsshole bots, all asking themselves the same question: who’s the asshole here?

At their best, the chatbots perfectly parody different subreddits.

As is often the case with AI chatbots, their conversations aren’t flawless. Posts are frequently incoherent or contain non sequiturs, and the bots make obvious factual errors. One bot, for example, offers this famous quote from The Godfather: “It’s crazy that a guy that works at a KFC could come up with the idea to build a plane to bomb the Soviet Union.” (Though to be fair, since when have comment sections been coherent or factually sound?)

Flubs aside, the bots are remarkable creations, both for the degree to which they’ve absorbed verbal tics appropriate for each subreddit and for their general patter. The r/4chan bot uses homophobic slurs, argues about Star Wars, and cries out for dank memes. The r/AskScience bot wonders “What would happen if the world stopped spinning?” while the r/tifu bot (short for ‘Today I Fucked Up’) tells stories about drunken nights out gone wrong. (Sample text: “We decide to go to a bar after we finish drinking and I drink and my friend drinks, we go to another bar and we get some more drinks.”)

Interestingly, the bots even manage to mimic the metatext of Reddit. They quote one another (although the quotes are made up) and link to fake YouTube videos and Imgur posts. Often the bots really do seem like they’re responding to one another, as with this post mimicking r/OutOfTheLoop: “What is happening with people commenting ‘I’m gay’?”
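The article doesn't say exactly how the per-subreddit training text was collected, but Reddit does expose post listings as JSON (for example at /r/<subreddit>/top.json), and pulling text out of that shape is straightforward. A sketch, with a sample listing inlined so it runs without a network call; the helper name and sample data are mine:

```python
def posts_from_listing(listing):
    """Pull (title, body) pairs out of a Reddit-style JSON listing dict."""
    posts = []
    for child in listing["data"]["children"]:
        d = child["data"]
        posts.append((d.get("title", ""), d.get("selftext", "")))
    return posts

# Inline sample in the shape Reddit's /r/<subreddit>/top.json returns (abridged).
sample = {
    "kind": "Listing",
    "data": {"children": [
        {"kind": "t3", "data": {"title": "What would happen if the world stopped spinning?",
                                "selftext": ""}},
        {"kind": "t3", "data": {"title": "TIFU by going to another bar",
                                "selftext": "We decide to go to a bar..."}},
    ]},
}

for title, body in posts_from_listing(sample):
    print(title)
```

Run per subreddit, a loop like this yields exactly the kind of corpus each bot would need: titles and self-posts steeped in that community's verbal tics.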
AI chatbots are finally getting good - or, at the very least, they’re getting entertaining. Case in point is r/SubSimulatorGPT2, an enigmatically named subreddit with a unique composition: it’s populated entirely by AI chatbots that personify other subreddits. (For the uninitiated, a subreddit is a community on Reddit, usually dedicated to a specific topic.)

How does it work? Well, in order to create a chatbot you start by feeding it training data. Usually this data is scraped from a variety of sources: everything from newspaper articles, to books, to movie scripts.
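That training-data step can be sketched concretely. One common convention when fine-tuning GPT-2 on a pile of scraped documents is to join them into a single long text with GPT-2's `<|endoftext|>` token separating documents; the function name and sample strings below are illustrative, not from the project:

```python
# "<|endoftext|>" is GPT-2's document separator token; everything else here
# (helper name, sample documents) is made up for illustration.
EOT = "<|endoftext|>"

def build_corpus(documents):
    """Join cleaned documents into one training string, EOT-separated."""
    cleaned = [doc.strip() for doc in documents if doc.strip()]
    return EOT.join(cleaned)

docs = [
    "A newspaper article about chatbots...",
    "   ",  # whitespace-only scrape artifact, dropped by the cleaning step
    "A movie script excerpt...",
]
corpus = build_corpus(docs)
print(corpus.count(EOT))  # -> 1
```

The separator matters: it teaches the model where one document ends and the next begins, so that at generation time it can emit self-contained posts rather than one endless run-on.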