My Best Friend Is A Monkey, How Can This Be? Terrible Neural Network Riddles

Can a neural network write good riddles? Maybe, but I sure didn’t prove it with this post. If you have a taste for the terrible, read on:

The too obvious:

The What Is It Riddle
What’s the first letter of the word ‘E’?

Answer: E.

The Why Does It Look Like A Man Riddle:
Why does it look like a man’s head?

Answer: It might be a man’s head.

The mysterious:

Continue reading “My Best Friend Is A Monkey, How Can This Be? Terrible Neural Network Riddles”

One Thousand And One Neural Network Nights

Samples from the GPT-2 neural network are generally short – it can only write 3 or 4 paragraphs of text in a single sample. (This is still vastly better than earlier networks like char-rnn.)

I wanted to try getting GPT-2 to create a single unbroken sample by feeding each sample back in as the prompt for the next, over and over, on the vanilla GPT-2, just to see where it went.

I discovered that the bane of this neural network is a list. With the default 345M model almost every single run ended in an infinite list (Bible verses, Roman numerals, vaguely sequential numbers). In between there were a few megabytes of climate speeches, but everything ended in numbers stations. I may do an ‘absurdly long lists’ post later. But if you need to defeat an evil robot powered by the GPT-2 neural network, don’t go with the classic approach of “This statement is a lie.” Start a list, because once a neural network starts counting IT CAN NOT STOP.
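That feed-each-sample-back-in loop is simple enough to sketch. Here’s a minimal version in Python, with a stand-in `generate` function since the actual GPT-2 sampling code isn’t shown in this post – the toy “model” below just counts, which handily demonstrates the runaway-list failure mode:

```python
def chain_samples(generate, seed, rounds, window=1000):
    """Grow one long text by feeding each sample's tail back in as the next prompt."""
    story = seed
    for _ in range(rounds):
        # GPT-2 has a limited context window, so only the end of the
        # story so far is used as the prompt for the next sample.
        prompt = story[-window:]
        story += generate(prompt)
    return story

# A stand-in "model" that just continues counting -- the failure mode
# described above, where a run collapses into an endless list:
toy_model = lambda prompt: " " + str(prompt.count(" ") + 1)
print(chain_samples(toy_model, "1", rounds=4))  # 1 1 2 3 4
```

Swap `toy_model` for a real GPT-2 sampling call and you reproduce the experiment (and, usually, the infinite list).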

I still wanted to try a longer sample. One Thousand and One Nights is sort of a single story, sort of a series of short stories. Meandering narratives, asides, stories inside stories – a story designed to never end – it already sounds a lot like what you get out of a neural network! So I began with the first paragraph of One Thousand and One Nights.

Continue reading “One Thousand And One Neural Network Nights”

Scenes That Never Happened In The Web Serial WORM

Worm is a web serial written by Wildbow. You can read it on parahumans.wordpress.com. If you have not read Worm, turn away now, because even silly neural network bits will spoil you.

If you insist on reading anyway, for the love of God at least only look at the first half of the blog post. The second half contains paragraphs of text from Worm itself – a greatest hits of spoilers.

These are scenes generated by the GPT-2 neural network. The first section has unconditional scenes – where the network is just told ‘write something’; the second section has prompted scenes – where the network is given an existing Worm scene and asked to complete it.

Continue reading “Scenes That Never Happened In The Web Serial WORM”

I Forced A Bot To Watch Over 1,000 Hours Of Star Trek Episodes And Then Asked It To Write 1,000 Olive Garden Commercials.

I wish I could tell you I had a good reason why.

Anyway, let’s use the GPT-2 345M model to recreate the viral (but not real) “I Forced a Bot” tweets that I named this site after… but with a model actually trained on Star Trek.

I was going to try many different training sets and cover more of the original viral tweets, but the Star Trek Olive Garden commercial samples are just killing me by themselves. I honestly think I could do nothing with GPT-2 but generate Olive Garden commercials from different models and never get bored. It deserves its own post!

Continue reading “I Forced A Bot To Watch Over 1,000 Hours Of Star Trek Episodes And Then Asked It To Write 1,000 Olive Garden Commercials.”

Ensntalice! What Would a ‘True Steake’ Spell Do? Prompted D&D Spells

When I was working on the first post about D&D spells from a neural network, I generally let the network run wild and create the spells from nothing, which also created the spell names. But I did try ‘prompting’ the network with the spell names from @JanelleCShane’s neural network D&D spell names post and asking it to fill in the rest of the spell information.

I made a ton, but they were a bit harder to skim through since you can’t rely on a catchy spell name to jump out. I was going to make better sifting tools but figured I’d post what I’ve got for now. Thanks to my friend Sam for picking out some good ones. Be warned: a lot of these samples were from a terrible model that went way off the rails and just generated absolute nonsense – but it also gave us such delights as a spell that is just “No,No,No,No” over and over.

Continue reading “Ensntalice! What Would a ‘True Steake’ Spell Do? Prompted D&D Spells”

Dungeons and Dragons Spells from a Neural Network Are Bonescrackling

In my last post I trained GPT-2 to write Star Trek scripts. Lately I’ve been experimenting with Dungeons and Dragons spells with some amazing results.

I like tabletop roleplaying material for generation because tabletop rules often require a good faith effort at human interpretation anyway. That same effort can make some sense of the silliest of machine generated rules.

I picked out a bunch of my favorites, and there are a lot more spells at the bottom of this blog post for anyone who wants to hunt for more good stuff. Also, I’ll be posting more on my Twitter.

Continue reading “Dungeons and Dragons Spells from a Neural Network Are Bonescrackling”

You Don’t Need to Force a Bot to Watch Star Trek to Write Its Own Episodes

In my last post I trained OpenAI’s neural network on the Star Trek: The Next Generation and Deep Space Nine scripts and had it write some of its own.

I used a version of GPT-2 that lets you feed it additional training material, which is why the output looks like a TV script. But even without fine-tuning, GPT-2 knows a whole lot about Star Trek, because it was trained on a huge sample of the internet, and the internet knows a lot about Star Trek. So what happens if you ask regular GPT-2 to predict the next word in the sequence “This episode of Star Trek: The Next Generation opens with a scene where”?

This episode of Star Trek: The Next Generation opens with a scene where The Enterprise's fuel cells are destroyed, and the crew and crew of the Enterprise are not rescued. As they are being evacuated, Picard uses the transporter device to activate the emergency access and communication system, and a rescue crew arrives to rescue the crew. 

The transporter device is activated, and the crew is saved. Later on, in a flashback scene, Picard describes how the Enterprise was going to be destroyed: "We had to determine what we could do with the fuel and the crew and the power that we had."

The Enterprise is finally able to reach the planet, and the crew and crew escape. While the power is still available through the emergency access, the power is being used to launch an attack on the Klingon capital.
Continue reading “You Don’t Need to Force a Bot to Watch Star Trek to Write Its Own Episodes”

I Forced a Bot To Watch Every TNG and DS9 Episode and Write One of Its Own

You may have seen “I forced a bot to watch” posts where someone claims to have “forced a bot” to read or watch hours of video and write its own script. Those weren’t real, but it has recently become possible to do it for real, in a pretty convincing way, with OpenAI’s GPT-2 system.

This is an unbroken, unedited sample of GPT-2 ‘fine-tuned’ on all the TNG and DS9 scripts. Note that television scripts have distinct formatting and style, and all of that is copied perfectly by the bot. It even *almost* understands page numbers (in the first example – 28, 28, 28A).

Continue reading “I Forced a Bot To Watch Every TNG and DS9 Episode and Write One of Its Own”
