Dank Learning: Generating Memes Using Deep Neural Networks

arxiv.org

172 points by nevatiaritika 8 years ago · 37 comments

minimaxir 8 years ago

As someone who has spent a lot of time working with text-generating neural networks (https://github.com/minimaxir/textgenrnn), I have a few quick comments.

1) The input dataset from Memegenerator is a bit weird. More importantly, it does not distinctly identify top and bottom texts (some captions use a capital letter to signify the start of the bottom text, but that isn't consistent). A good technique when encoding text for these kinds of tasks is to use a control token (e.g. a newline) to mark the break, as in the sketch below. (The conclusion notes this problem: "One example would be to train on a dataset that includes the break point in the text between upper and lower for the image. These were chosen manually here and are important for the humor impact of the meme.")
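
A minimal sketch of that encoding (the "<sep>" token and the helper names are illustrative, not from the paper):

    # Encode each meme as "top <sep> bottom" so the model learns where
    # the break belongs instead of guessing from capitalization.
    SEP = " <sep> "  # illustrative control token

    def encode_caption(top, bottom):
        """Join the two caption halves with an explicit break token."""
        return top.lower().strip() + SEP + bottom.lower().strip()

    def decode_caption(generated):
        """Split generated text back into top/bottom halves."""
        top, _, bottom = generated.partition(SEP.strip())
        return top.strip(), bottom.strip()

    print(encode_caption("ONE DOES NOT SIMPLY", "WALK INTO MORDOR"))
    # -> one does not simply <sep> walk into mordor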

2) The GloVe embeddings don't make as much sense here, even as a base. Embeddings generally work best on text that follows real-world word usage, which memes do not. (In this case, it's better to let the network train the embeddings from scratch.)

3) A 512-cell LSTM might be too big for a word-level model on a dataset of that size; since the text follows rules, a 256-cell Bidirectional LSTM might work better (rough sketch below).
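
A rough Keras sketch of points 2) and 3) together; all sizes here are illustrative, not tuned:

    # Embeddings trained from scratch (no GloVe), plus a smaller
    # Bidirectional LSTM for the word-level model.
    from keras.layers import Embedding, Bidirectional, LSTM, Dense
    from keras.models import Sequential

    VOCAB_SIZE = 20000  # assumed vocabulary size
    SEQ_LEN = 40        # assumed input window, in tokens

    model = Sequential([
        Embedding(VOCAB_SIZE, 100, input_length=SEQ_LEN),  # learned from scratch
        Bidirectional(LSTM(256)),                          # 256 cells per direction
        Dense(VOCAB_SIZE, activation="softmax"),           # next-token prediction
    ])
    model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")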

  • fixermark 8 years ago

    Question: This is one of the parts of neural nets that has always seemed like completely opaque voodoo to me. What estimate are you making to suggest that a 512-cell LSTM could stand to be swapped out for a 256-cell bidirectional one? What constraints are you optimizing for?

    • minimaxir 8 years ago

      Not a constraint per se, but having too big a neural network (or any statistical model) can cause it to overfit and generalize poorly, and generalizing well is exactly what you want for text generation.

      You can use 512-cell LSTMs if you have a lot of text, though.
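
      One illustrative guardrail either way (the variable names here are placeholders): hold out some of the text and stop training when validation loss stops improving.

          from keras.callbacks import EarlyStopping

          # Stop once validation loss plateaus, keeping the best weights.
          early_stop = EarlyStopping(monitor="val_loss", patience=3,
                                     restore_best_weights=True)
          # model.fit(x_train, y_train, validation_split=0.1,
          #           epochs=100, callbacks=[early_stop])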

glup 8 years ago

Very silly; best not to alert the media or we'll soon see "AI can now generate memes" clickbait.

I thought it was funny, though, that Richard Socher, one of the authors of GloVe and an NLP researcher, is pictured in the generated memes on p. 8 ("the face you make when").

  • YeGoblynQueenne 8 years ago

    >> Very silly; best not to alert the media or we'll soon see "AI can now generate memes" clickbait.

    This Artificial Intelligence Learned to Create Its Own Memes and the Results will Make you ROFL!!

    How scientists trained an AI to create memes by looking at images

    The end is near. The singularity is here. Run for your lives!1!!

aw3c2 8 years ago

This is a complete joke, right? What is better about those results than a simple "image + headline + random bottom line" algorithm?
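
For reference, that baseline fits in a few lines (the lists here are made-up placeholders):

    import random

    # Pick an image, a headline, and a random bottom line independently.
    images = ["success_kid.jpg", "bad_luck_brian.jpg"]
    headlines = ["one does not simply", "what if i told you"]
    bottom_lines = ["profit", "walk into mordor", "thats none of my business"]

    print(random.choice(images), "|",
          random.choice(headlines), "|",
          random.choice(bottom_lines))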

  • wodenokoto 8 years ago

    Judging from the URL posted in an earlier top-level comment, this might be a student report.

    https://web.stanford.edu/class/cs224n/reports/6909159.pdf

  • ekianjo 8 years ago

    Exactly. Memes are funny because they make meta references that are culturally relevant or simply attach absurd bottom lines. It's highly unlikely a deep neural network can model anything like that.

    • dmschulman 8 years ago

      Considering most deep learning results are interpreted as absurd/bizarre, I don't think the machine will have much difficulty intentionally or unintentionally emulating meme culture.

      • vertexFarm 8 years ago

        That was my thought. They need to crank the noise way up and aim for some surreal memes, not these ancient fossilized memes from 2010.

    • stochastic_monk 8 years ago

      I think the image needs to be an input somehow. I imagine running an image classifier (e.g., YOLO9000) to extract “pretrained” features and making those values inputs into a modified LSTM could allow learning to synthesize text and perception. I’d suggest learning new image embeddings (training a neural network to extract image features from scratch), but it’d be difficult to get enough images/enough different images.
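
      YOLO9000 isn't bundled with the common frameworks, so here's the same idea roughly sketched in Keras with InceptionV3 features seeding the caption LSTM's state (all sizes are made up for illustration):

          from keras.applications import InceptionV3
          from keras.layers import Input, Dense, Embedding, LSTM
          from keras.models import Model

          VOCAB_SIZE, SEQ_LEN, HIDDEN = 20000, 40, 256  # illustrative sizes

          # Frozen pretrained classifier as the "pretrained" feature extractor.
          cnn = InceptionV3(weights="imagenet", include_top=False, pooling="avg")
          cnn.trainable = False

          img_in = Input(shape=(299, 299, 3))
          img_feats = Dense(HIDDEN, activation="relu")(cnn(img_in))

          txt_in = Input(shape=(SEQ_LEN,))
          txt = Embedding(VOCAB_SIZE, 100)(txt_in)
          # Image features initialize the LSTM's hidden and cell states.
          hidden = LSTM(HIDDEN)(txt, initial_state=[img_feats, img_feats])
          next_word = Dense(VOCAB_SIZE, activation="softmax")(hidden)

          model = Model([img_in, txt_in], next_word)

      Concatenating the image features onto every timestep's input would be another way to wire it up.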

nofinator 8 years ago

Full paper with some examples here: https://web.stanford.edu/class/cs224n/reports/6909159.pdf

  • ekianjo 8 years ago

    Pretty unfunny results.

    • jwilk 8 years ago

      "I should buy a boat" and "blackjack and hookers" image macros usually require external context to be understood. So you can't even tell if they're funny or not.

      The other generated images are just dumb.

      • dsfyu404ed 8 years ago

        I at least chuckled at the "I'm not racist, I'm just a hipster" one. That said, I'm not a hipster, so it doesn't personally insult me, and I don't see how the image is at all relevant to the text.

    • yellowapple 8 years ago

      I got a mild chuckle out of them.

Xyzodiac 8 years ago

I was expecting this to use some formats that aren't from 2012. It would be interesting to see a neural network that could generate text for more complex meme formats that trend on Twitter and Instagram.

  • brian-armstrong 8 years ago

    Yeah, I immediately looked for a date on this - feels like "neural net generates ancient text using ancient tomes"

Cthulhu_ 8 years ago

Reminds me a bit of https://www.reddit.com/r/SubredditSimulator/

jcfrei 8 years ago

It looks like a joke now, but I'm fairly convinced that in the not-too-distant future the most influential social media accounts will be run by some kind of AI.

  • Cthulhu_ 8 years ago

    Who knows, maybe they already are? I mean, I'm confident there are a ton of content farms out there already that just run a cronjob every couple of minutes to pluck the top ten images off a subreddit, check whether they've been published on their own channel yet, and republish them.
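
    A hypothetical sketch of that loop against reddit's public JSON listing (the dedup store and the actual republish step are stubbed out):

        import requests

        SEEN = set()  # stand-in for a real "already published?" store

        def top_images(subreddit="pics", limit=10):
            url = "https://www.reddit.com/r/%s/top.json" % subreddit
            resp = requests.get(url, params={"limit": limit, "t": "day"},
                                headers={"User-Agent": "demo-bot"})
            posts = resp.json()["data"]["children"]
            return [p["data"]["url"] for p in posts]

        for image_url in top_images():
            if image_url not in SEEN:
                SEEN.add(image_url)
                print("would republish:", image_url)  # republishing stubbed out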

    If not, I'll brb, need to set up some websites / facebook accounts.

    • toomanybeersies 8 years ago

      9gag was caught out a few years back for automatically harvesting images off the front page of reddit, then posting them to 9gag as if they came from a "real user", and artificially inflating the upvotes.

      You could tell it was automated because, every once in a while, a very reddit-specific meme would appear on the 9gag front page, with a bunch of confused comments from 9gag users who didn't understand it. Here's a writeup on it from a couple of years ago [1].

      I don't doubt that other clickbait sites like BoredPanda do exactly the same thing.

      [1] https://www.reddit.com/r/pcmasterrace/comments/3z2wvf/about_...

momania 8 years ago

Let me leave this here: https://imgur.com/a/ZOcKWmp

Miltnoid 8 years ago

Holy shit, this has the NIPS format.

If this was submitted, we are certainly in the dankest timeline.

typon 8 years ago

All their generated examples look like Markov-chain-generated captions: pretty random and generally unfunny. I completely disagree with the claim that you can't differentiate between these generated memes and real memes. None of these would make the front page of reddit, for example.
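
To make the comparison concrete, here's the kind of Markov chain I mean (toy corpus, purely illustrative):

    import random
    from collections import defaultdict

    corpus = "one does not simply generate dank memes one does not care".split()

    # word -> list of words observed to follow it
    chain = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        chain[a].append(b)

    def generate(start="one", length=8):
        out = [start]
        for _ in range(length - 1):
            followers = chain.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    print(generate())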

mr__y 8 years ago

that's still funnier than 9gag

ferongr 8 years ago

They're called image macros, not memes.

  • swebs 8 years ago

    One is a subset of the other. You could also call these ones "advice dog variants" or "unfunny reddit cancer".

    • SimbaOnSteroids 8 years ago

      In this case, yes, these memes are a subset of image macros, but that's because the algorithm only produces images. Not all memes are images: "press F to pay respects", the old $pun-aroo, Zoop, "and my axe", and "we did it reddit" are all examples of memes that aren't image macros.

  • dsschnau 8 years ago

    If you live in 2002 yeah

a_r_8 8 years ago

Examples?
