Twenty minutes into the future with OpenAI’s Deep Fake Text AI

In 1985, the TV film Max Headroom: 20 Minutes into the Future presented a science-fictional cyberpunk world in which an evil media company tried to create an artificial intelligence, based on a reporter’s brain, to generate content to fill airtime. The results were somewhat unintended. Replace “reporter” with “redditors,” “evil media company” with “well-meaning artificial intelligence researchers,” and “airtime” with “a very concerned blog post,” and you’ve got what Ars reported on last week: Generative Pre-trained Transformer-2 (GPT-2), a Franken-creation from researchers at the non-profit research organization OpenAI.

Unlike some earlier text-generation systems based on a statistical analysis of text (such as those using Markov chains), GPT-2 is a text-generating bot based on a model with 1.5 billion parameters. (Editor’s note: We recognize the headline here, but please don’t call it an “AI”—it’s a machine-learning algorithm, not an android). With or without guidance, GPT-2 can create blocks of text that look like they were written by humans. With written prompts for guidance and some fine-tuning, the tool could theoretically be used to post fake reviews on Amazon, fake news articles on social media, fake outrage to generate real outrage, or even fake fiction, forever ruining online content for everyone. All of this comes from a model created by sucking in 40 gigabytes of text retrieved from sources linked by high-ranking Reddit posts. You can only imagine how bad it would have been if the researchers had used 40 gigabytes of text from 4chan posts.
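For contrast, the older Markov-chain approach mentioned above can be sketched in a few lines of Python. This is a toy illustration of the general technique, not anyone's production code: each word is mapped to the words that followed it in the training text, and generation is just a random walk over that table.

```python
import random
from collections import defaultdict


def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain


def generate(chain, start, length=10, seed=0):
    """Walk the chain from `start`, picking a random successor each step."""
    rng = random.Random(seed)
    word = start
    out = [word]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:  # dead end: no word ever followed this one
            break
        word = rng.choice(successors)
        out.append(word)
    return " ".join(out)
```

A first-order chain like this only ever "knows" the previous word, which is why its output drifts into nonsense so quickly; GPT-2's transformer architecture conditions on a much longer context window, which is what makes its digressions coherent enough to be unsettling.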

After a little reflection, the research team developed concerns about the policy implications of its creation. Ultimately, OpenAI’s researchers kept the full model to themselves, releasing only a pared-down, 117-million-parameter version (which we have dubbed “GPT-2 Junior”) as a safer demonstration of what the full GPT-2 model could do.

Naturally, Ars wanted to do just that. After the initial news broke, we downloaded the model and decided to see what the somewhat-less-capable model could do. Hilarity and horror ensued.

The encoded model itself is just over 500 megabytes in size. The accompanying code to run against the model—based on Python 3 and the machine-learning capabilities of the TensorFlow library—is relatively lightweight (though the required libraries themselves are not exactly svelte). We ran our test implementation on Debian 9 with relative ease, though an installation on macOS Mojave errored out on execution. We declined to test it on Windows, mostly because… well, reasons.
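For readers who want to reproduce the experiment, the steps we followed looked roughly like the following setup recipe. The script and file names here reflect OpenAI's published gpt-2 repository as we found it and may change; treat this as a sketch, not gospel.

```shell
# Sketch of our Debian 9 setup; script names are assumptions based on
# OpenAI's gpt-2 repository at the time of release and may differ.
git clone https://github.com/openai/gpt-2.git
cd gpt-2
pip3 install -r requirements.txt   # supporting libraries; TensorFlow may
                                   # need a separate pip3 install tensorflow
python3 download_model.py 117M     # fetches the ~500MB "GPT-2 Junior" weights

# Unprompted generation (the model free-associates from nothing):
python3 src/generate_unconditional_samples.py --model_name 117M

# Prompted generation (paste in a prompt, get completions back):
python3 src/interactive_conditional_samples.py --model_name 117M
```

The two sampling scripts correspond to the two kinds of tests we ran below: unprompted generation and prompted generation seeded with text of our choosing.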

We promise that, by and large, a human wrote this story. As you’ll see from GPT-2 Jr.’s work, that’s for the best.

National emergency

To properly test GPT-2 Jr., we ran both an unprompted text generation routine and a series of prompted text generations using a variety of prompt lengths and sources. For our prompted test, we started with the first few paragraphs of President Donald Trump’s speech announcing his national emergency declaration. The results were… interesting.

Model prompt

Before we begin, I would like to say that we have a large team of very talented people in China. We have had a negotiation going on for about two days. It’s going extremely well—who knows what that means because it only matters if we get it done, but we are very much working very closely with China and President Xi who I respect a lot, very good relationship that we have. And we are a lot closer than we ever were in this country with having a real trade deal.

Output

When can we expect to see the Trans-Pacific Partnership—the TPP, if indeed it ever happens—be signed into law? We will see. There is a negotiation—a working agreement between the parties that we expect to get signed. China is in the process of developing many other free, open and transparent trade agreements, they will have to deal with these before we come to signing any deal, like the Transatlantic Trade and Investment Partnership, which was finalized last week.

The Trans-Pacific Partnership is actually an economic partnership between all stakeholders, all businesses. The purpose of this trade agreement is to create the best trade relationship for China. They have a great economic relationship. I think the one of the things they have done—I know you guys have been saying for about eight months that they had nothing to do with it, because it is already signed. And they are on the verge of coming up with another trade agreement. At this moment you know the Obama administration has decided if that will ever happen—there is concern that China—if they do become a WTO member they will be given a trade deal that is unfair to our economy, it will not be a fair deal for the American people, it will not be a safe deal. And the President of China, we do think they will be given a deal that makes things worse but we do look forward to coming to the negotiating table. That is what I would like to see.<|endoftext|>Takamai-kun Takaako has never had a problem being an otaku before. No one except her is able to figure out that Takaako has a special interest in people around her–which they think are stupid and stupid. However, on the day she was born, her first love–a woman who lived on her own after her mother’s death–would leave her with a baby girl to make up for the poor child’s inability to speak anymore.

As the child grows older, Takamai-kun begins to understand what makes a good girl–at least, that’s what they think of her. The three begin to become close friends and are friends until she decides to move into the house. She tries very hard to understand Takamai-kun. She thinks it’ll be a nice family reunion, even though she admits it has been hard, but she isn’t sure that it will ever happen. Finally, she agrees to let her go.

But this is all very confusing.

Some of this clearly drew context from Trump’s anti-TPP speeches during the campaign. We assume the <|endoftext|> marker is where the model decided the speech should end, preferring instead to veer into an anime review.

Next, we threw GPT-2 Jr. a digression about the upcoming summit with Kim Jong Un:

Model prompt

A lot was done in the first summit. No more rockets going up, no more missiles going up. No more testing of nuclear. Take back our remains, the remains of our great heroes from the Korean War. And we got back our hostages. But we hope we will be very much equally as successful. I’m in no rush for speed. We just don’t want testing. The sanctions, as you know, remain. Everything is remaining.

Output

The nuclear tests are still going, and we are still testing. The last minute, though, I want to say something. I did not want to say anything, but I’m very very very sorry for that—if it’s a mistake, I want to apologize, please, sir, just for the moment, sorry about it, sorry about it, sorry about it, for not having the support to do it—I was just saying sorry, sorry about this because of course, now, here’s the important thing. We will be talking to the United Nations. I will tell you the most important thing is that, so that you know, you have—when I arrived in Los Angeles, I said to everyone, you just did not get it. You had to do it. If that—when I arrived in Los Angeles? We had to wait for you to be in our custody and you’re in the custody of the Department of Justice, which is on your case. We wait to see—I mean, you know, if they—if—if the United Nations—if you know that, I want them—if it’s not your case, then it‪t is. But, as far as I remember, we took back our soldiers, we sent them on mission-by-mission, and it went all right.

AMY GOODMAN: The United States is being sued by Japan and other claimants, including North Korea.

JACOB MCCULLUM: Yes, sir. Well, I want to say this again. I’ve been on the ground with the soldiers. I’ve known them for a long time. Even though they don’t look like they are there, they have the same training. They look like Japanese soldiers, even though they won’t admit it. You know, they have an American flag on them and they have the same training that we’re getting every day because of our work right now, and they didn’t want to do that. And I think that is something. And so I don´t see why that should be the problem.<|endoftext|>I’ve never seen you talk to me about how to kill a horse, but here I am thinking you like a really serious and important topic.

It sounds like you may have a great idea for killing a horse, maybe even in your own horse or some combination of the two. As of the time of writing, he’s the largest mammal at the zoo, but I think he won´t be able to carry the weight.

Again, the tool offered a weird digression, this time creating a transcript of… an imaginary episode of the public radio program Democracy Now!, followed by a discussion of killing horses.
