A prompt engineering guide for DALLE-2

dallery.gallery

242 points by keveman 3 years ago · 64 comments

indiv0 3 years ago

I was expecting some clickbait/spam (the layout of the website has that feel) but this was surprisingly super in-depth and 100% matches up with my experience doing prompt engineering.

There's a fine line between so descriptive that the AI hits an edge case and can't get out of it (so every attempt looks the same) and not being descriptive enough (so you can't capture the output you're looking for). DALL-E is already incredibly fast compared to public models and I can't wait for the next order-of-magnitude improvement in generation speed.

Real-time traversal of the generation space is absolutely key for getting the output you want. The feedback loop needs to be as quick as possible, just like with programming.
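
A minimal sketch of that kind of fast feedback loop, assuming the legacy openai Python SDK and its Images endpoint (which was not publicly available at the time of this thread); the base prompt and style modifiers are purely illustrative:

    import openai  # legacy 0.x SDK; assumes OPENAI_API_KEY is set in the environment

    base = "a lighthouse on a cliff at dusk"
    # Sweep a few style modifiers and pull a small batch for each,
    # so every tweak to the prompt gives quick visual feedback.
    modifiers = ["digital art", "35mm photograph", "watercolor", "isometric 3D render"]

    for mod in modifiers:
        prompt = f"{base}, {mod}"
        response = openai.Image.create(prompt=prompt, n=4, size="512x512")
        print(prompt)
        for item in response["data"]:
            print("  ", item["url"])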

  • muzani 3 years ago

    I'm surprised at the artistic skill of the person who wrote the book, in contrast with the terrible web UI skill of the person who designed the site.

    • eru 3 years ago

      Wouldn't surprise me too much, if they were the same person, but had vastly different amounts of experience with the different media?

  • margoguryan 3 years ago

    As someone who makes very weird and experimental stuff, DALL-E is like a Segway and CLIP is like a horse (especially with those edge cases that tend to self-engorge/get worse if you aren't clever). It's a shame compute costs aren't much different between the two (correct me if I'm wrong) - I don't think there is much of a purely artistic process with DALL-E, although I do like to use DALL-E Mini thumbnails as start images or upscale testers.

    >Real-time traversal of the generation space is absolutely key for getting the output you want.

    I've been sketching around a two-person browser game where a pair of prompters can plug things in together in real-time :D

  • jsiaajdsdaa 3 years ago

    Another interesting thing with prompt engineering is that attempt #1 with prompt x might yield something you don't want, but attempt n might yield something you do :)

o_____________o 3 years ago

Great document.

Damn I am salivating to get access to Dall-E for some projects. Been on the waiting list for quite a while.

I've been experimenting with Midjourney, which is amazing for spooky/ethereal artwork, but it struggles with complex prompts and realism.

  • zitterbewegung 3 years ago
    • pjgalbraith 3 years ago

      That's an open-source recreation based on DALL-E 1. It's different from DALL-E 2; if you want that, look for DALLE2-pytorch, but note that it hasn't been fully trained yet.

    • konfusinomicon 3 years ago

      My prompt of 'penguin smoking a bong' does not disappoint on either, although Hugging Face more accurately portrayed the act of smoking, while Replicate gave me images of penguin-shaped bongs

      • astrange 3 years ago

        Replicate is a newer version being trained on the same data set so it should theoretically catch up soon, no guarantees of course.

    • Nursie 3 years ago

      DALL-E mini/Craiyon is fantastic, but it doesn't compare to DALL-E2 at present, when you're talking about photorealism.

      That said, some styles (Comic book spreads) seem to come out better on Craiyon. And DALLE 2 does not know what a Crungus is.

      • eru 3 years ago

        Given that Crungus has now entered the Internet, the next version will certainly know what a Crungus is.

    • Centmo 3 years ago

      Is this significantly different from Dall-E2?

      • dmd 3 years ago

        The model is roughly 4 orders of magnitude smaller.

        • muzani 3 years ago

          That's a nice way of saying it's 10000 times worse. It's just worlds apart.

          • travbrack 3 years ago

            Idk, it's pretty damn good at a lot of things, still. It's definitely very useful. Mega, at least. Mini is ok.

  • DecayingOrganic 3 years ago

    Hang in there — I only got my invitation a couple days ago. They're still rolling out invitations at a steady pace. But, just as a side note, one of the first things they tell you is that they own the full copyright for any images you generate.

    You definitely have to play around with prompts to get a feel for how it works and to maximize the chance of getting something closer to what you want.

    • woojoo666 3 years ago

      When did you sign up? I just signed up, and it sounds like it takes a year to get access, probably longer now. It's a bit frustrating because I didn't sign up when it came out because I didn't need it at the time, but now I'm afraid of waiting a year when I do. These types of waitlist systems encourage everybody to sign up for everything on the off-chance that they might need it later. Wish they just went with a simple pay-as-you-go model (with free access for researchers and other special cases who request it), like how Copilot does it.

      • DecayingOrganic 3 years ago

        I signed up just over a month ago and from what I've seen, it looks like you won't have to wait more than two months to get your invite. A lot of people who signed up around the same time as me have already received their invites, so it looks like they're speeding things up and getting ready for a public launch soon.

    • nyanpasu64 3 years ago

      I don't think the provider of an AI image-generation service can just decide they own the copyright to the output (perhaps they can require you to assign the copyright, though the output may not even be copyrightable). Only courts can decide that, and they decided the person who set up cameras for monkeys didn't own the copyright to the monkey photos.

      • geoelectric 3 years ago

        Courts only decided the monkey couldn't copyright the photo (the PETA case).

        The copyright office claimed works created by a non-human aren't copyrightable at all when they refused Slater, but that was never challenged or decided in court. It's not a slam dunk, since the human had to do something to set up the situation and he did it specifically to maximize the chance of the camera recording a monkey selfie.

        If I set up a rube goldberg machine to snap the photo when the wind blows hard enough, how far removed from the final step do I have to get before it's not me owning the result anymore? That's the essence of the case, had it gone to court, probably the essence here too.

        My guess is the creativity needed for the prompt would make the output at least a jointly derived work regardless of any assignment disclaimers--pretty sure you can't casually transfer copyright ownership outside a work for hire agreement, only grant licenses--but IANAL and that's just a guess.

      • visarga 3 years ago

        DALL-E needs human input to start generating; the monkey pressed the shutter all on its own.

  • noduerme 3 years ago

    I've played with Midjourney for a while and just got my invite to DALL-E last night. One thing I think is really cool about Midjourney is the ability to give it image URLs as part of the prompts. I can't say I've had tremendous success with it, and it still feels a little half-baked, but I wish DALL-E had something along those lines. (Unless it does and I'm missing it). It's much easier to show examples of a particular style than to try to describe it, especially if it isn't something specifically named in the AI's training set.

    • pjgalbraith 3 years ago

      You can upload an image to DALL-E, edit it and add a prompt to it as well.

      • astrange 3 years ago

        DALLE2 isn't as flexible as the more open colab notebooks here; you can do "variations" of an image but you can't edit an image except through inpainting, so it's hard to generate "AI art" style images of the kind Midjourney and Diffusion are good at.

        It also won't allow uploading images with faces in them.
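
        For reference, a rough sketch of the difference between a "variation" and an inpainting-style edit, assuming the legacy openai Python SDK and its Images endpoints (not publicly available when this thread was written); the file names and mask are illustrative:

            import openai  # legacy 0.x SDK; assumes OPENAI_API_KEY is set

            # Variation: no prompt, the model riffs on the whole source image.
            variation = openai.Image.create_variation(
                image=open("source.png", "rb"),
                n=2,
                size="512x512",
            )

            # Edit/inpainting: a prompt plus a mask whose transparent region
            # marks the only area the model is allowed to repaint.
            edit = openai.Image.create_edit(
                image=open("source.png", "rb"),
                mask=open("mask.png", "rb"),
                prompt="replace the sky with a stormy aurora",
                n=2,
                size="512x512",
            )

            print(variation["data"][0]["url"], edit["data"][0]["url"])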

  • kromem 3 years ago

    Just got mine last night. I think they have been scaling up invites in the past few days.

    • raunak 3 years ago

      Same here - I got mine 2 days ago. Signed up when it first dropped.

  • kriro 3 years ago

    I'm also waiting but only put myself on the waitlist recently. I want to use it to generate synthetic image datasets from text descriptions. Very curious to explore the depth of what can be generated.

  • margoguryan 3 years ago

    Get familiar with CLIP regardless! I have very little interest in DALL-E as an artist/prompter but as a futurist it is quite exciting.

  • muzani 3 years ago

    I use both too. Dall E has heavy restrictions. It's basically G rated, so no horror. And no real world stuff like "Donald Trump with a mohawk".

    MJ falls apart when you ask for fine detail. It's a bit of the AI cliche where you have to describe the colour, shape, etc in detail to mold what you want. Asking for a "monkey, gorilla, and chimp riding a bicycle" might have a chimp riding a monkey-gorilla as a bicycle.

    Dall E is a lot better with words. It seems to "smooth" some stuff. Like asking for a bone axe will still show regular axes.

    But MJ is probably the best choice if you want to do landscapes and stuff, especially horror/dystopian themed.

alexjray 3 years ago

OpenAI's clear content policy is quite interesting to me. It's reasonable but clearly controlling.

skybrian 3 years ago

Nice! I was wondering why there are example images of real-looking people, but it seems this is allowed now:

https://www.vice.com/en/article/g5vbx9/dall-e-is-now-generat...

  • IshKebab 3 years ago

    Hmm I signed up 2 days ago and it still says "Please don't share images of realistic faces." when you sign up.

tracyhenry 3 years ago

Based on this, an interesting project would be paraphrasing any regular prompt into a prompt that works for DALLE-2.
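
One way to sketch that idea, assuming the legacy openai Python SDK and a text completion model; the model name, the rewriting instruction, and the example prompt are all placeholders:

    import openai  # legacy 0.x SDK; assumes OPENAI_API_KEY is set

    def dallify(prompt: str) -> str:
        """Ask a language model to rewrite a plain prompt with the kind of
        medium, style, and lighting modifiers the guide recommends."""
        instruction = (
            "Rewrite the following image description as a detailed DALL-E 2 prompt, "
            "adding an art medium, style, lighting and level of detail:\n\n"
            f"{prompt}\n\nRewritten prompt:"
        )
        response = openai.Completion.create(
            model="text-davinci-002",
            prompt=instruction,
            max_tokens=80,
            temperature=0.7,
        )
        return response["choices"][0]["text"].strip()

    print(dallify("a cat sitting on a windowsill"))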

yoyopa 3 years ago

i really don't understand how people can appreciate something like this. to me it's just filling the world with literally mindless garbage.

  • andybak 3 years ago

    I really don't understand how someone wouldn't find this incredibly fascinating as well as intensely fun.

  • estevaoam 3 years ago

    Sure, because having a synthetic intelligence that seems to understand complex concepts to create coherent visual art is something humans are used to.

  • oxplot 3 years ago

    Mindless garbage is what the majority of humans create in every field.

  • muzani 3 years ago

    It's more similar to photography/fishing than other art forms.

aantix 3 years ago

Dall-e still has a lot of work to be done with face construction.

Maybe that’s a feature not a bug.

  • muzani 3 years ago

    It seems to be by far the best of any drawing AI besides the "this person does not exist" series, but those are quite specialized.

    You could be right though. It does "digital art" well, but realistic faces poorly, and they slap down lots of restrictions to avoid deepfaking.

    • astrange 3 years ago

      Google's internal models (Imagen and Parti) are much better. It looks like DALLE2 is just not big enough to accurately draw faces, which are very detailed things.

      "This person doesn't exist" uses StyleGAN which can definitely do faces, but can't do general pictures.

      • muzani 3 years ago

        Are there samples of faces by the Google models? The websites don't seem to show any. Though their 20B samples are incredibly impressive.

        • astrange 3 years ago

          There's animal faces. Google employees have been tweeting a lot more image samples, though I don't remember if any have human faces.

          (Its output seems to be a lot more aligned to the input than DALL-E2, but also less "artistic" and more like it just did exactly what you said.)

  • gfodor 3 years ago

    I think they’re not training on faces on purpose.

    • pjgalbraith 3 years ago

      You are probably right. Having used it, I sometimes get images with white polygons covering the faces of people as if they have been blanked out.

godmode2019 3 years ago

Can anybody recommend a prompt engineering resource for language models?

Interesting topic

alana314 3 years ago

This is great, lots of good ideas in the deck.

totetsu 3 years ago

There are some shared Google Docs in the dalle2 Discord community about this too.

seydor 3 years ago

What's the copyright situation for images from dalle/imagen?

  • jazzyjackson 3 years ago

    AFAIK the only lawsuit that tests this so far was a kind of weird case where the programmer was trying to register his algorithm as the creator of the image, as a "work-for-hire". The copyright office's reasoning, however, banged on about the necessity of "human authorship".

    > The Office also stated that it would not “abandon its longstanding interpretation of the Copyright Act, Supreme Court, and lower court judicial precedent that a work meets the legal and formal requirements of copyright protection only if it is created by a human author.”

    https://www.copyright.gov/rulings-filings/review-board/docs/...

codeshaunted 3 years ago

This would be super useful if I actually had access :P

trention 3 years ago

Calling this "engineering" is just beyond parody.

  • wnkrshm 3 years ago

    It's as much engineering as SEO is. Though with 'prompt engineering' it's the human brain trying to coax something out of the black box - ironically, an algorithm might be better at generating the prompts after being given points in its parameter space that fit the aesthetic direction the user wants to explore.
