Biases in Apple's Image Playground

giete.ma

43 points by orf 10 months ago · 22 comments

thisismyswamp 10 months ago

AI safety people worrying that basketball players don't have a perfectly balanced ethnic representation while mega-corporations are trying to establish a monopoly on intelligence.

madeofpalk 10 months ago

I don't know whether I would call it bias or not (and this is Apple's model falling for the same "poor means black" fallacy we've seen before), but I've found their image generation models to be incredibly poor at matching the photo you provide. I've tried it with many different photos of myself, and the results all vary wildly and look nothing like me.

It's comical how bad Apple's image generation models are.

givinguflac 10 months ago

“I could not replicate the above results with different photos, although I imagine that this will be possible with more effort.”

I have a feeling that the red background lighting in that image is what is causing confusion for the model.

That being said, I'm not surprised, and I'm not sure there's an obvious solution given current tech. I think Apple is making the right choice here by offering the public a “safe”, benign tiptoe into image generation.

  • jbs789 10 months ago

    This was my gut reaction too.

    I don’t know a tonne about the models, but the white balance and lighting in this photo are quite unusual, as is the presence of another person with a darker skin tone.

    So a person looking at the photo knows it’s a white dude, but a machine has a harder time.

  • madeofpalk 10 months ago

    The "obvious solution" is just a bigger model with better training data, right? Which seems to be against Apple's goals of trying to do this stuff with smaller models on device.

cranium 10 months ago

I had one extensive exchange with ChatGPT trying to generate an image of a man and a woman working together on a leather-crafting project. No matter the prompt, the man was invariably the one "working" while the woman was there to assist.

Bias correction in images feels a lot more primitive than in text.

ericmason 10 months ago

Seems like more of a bug than bias. The problem is that it ignores the appearance of the person in the first place. It's a statistical model, and of course there are more black rappers and white investment bankers. If it noticed that the person was white to begin with and applied that trait, it wouldn't have to guess about race at all.

  • prododev 10 months ago

    > It's a statistical model, and of course there are more black rappers and white investment bankers

    Yes, this is exactly what the author is pointing out: there's a statistical bias in the dataset that shows up in the results.

    • gruez 10 months ago

      Is it a "statistical bias" if it reflects the underlying data? Is it "bias" to generate mostly male lumberjacks, even though most are male?

      • prododev 10 months ago

        Yes. The term of art for this is "demographic bias", and it's exactly what you describe: the population itself has a skew for or against some demographic.

        An ML image generator designed to repaint someone as a lumberjack should work equally well for all users, no matter the actual real-world demographics. So the training dataset needs to account for this demographic bias if the model is not to overfit to it.

        This isn't some recent "woke" phenomenon; it has been a known issue in large ML projects for at least a decade, if not longer.

        If you are training a model to respond to automated test failures, you don't want to sample real-world test data in proportion to actual test outcomes, because most automated tests pass. This is also demographic bias, and it needs to be handled depending on what you want the model to learn.
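
        A minimal sketch of that kind of correction, using inverse-frequency class weights (the pass/fail numbers below are illustrative, not from the article):

            # Toy illustration of a demographic skew: if ~95% of automated
            # tests pass, naive sampling teaches a model to always predict
            # "pass". Weighting each class inversely to its frequency is one
            # standard correction.
            from collections import Counter

            def inverse_frequency_weights(labels):
                """Weight each class inversely to how often it appears."""
                counts = Counter(labels)
                return {label: len(labels) / (len(counts) * n)
                        for label, n in counts.items()}

            labels = ["pass"] * 95 + ["fail"] * 5
            print(inverse_frequency_weights(labels))
            # {'pass': 0.526..., 'fail': 10.0} -- each failure weighs ~19x more.
            # These weights would then scale each example's loss during training
            # (e.g. scikit-learn's class_weight or a weighted loss in PyTorch).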

miggol 10 months ago

All this marketability tuning may someday result in models that are extremely finely attuned to our current societal norms and taboos.

At which point the models will stop reinforcing (racial/gender) biases and start reinforcing said taboos instead. I don't think anyone wants that either.

  • slowmovintarget 10 months ago

    This is the real danger: Baking ideological bias into a utility function for the worst of all worlds.

    An LLM can't be ethical because mercy cannot be computed through a distance function. Truth isn't weighted, and justice requires context windows so large that humans can barely manage it and often fail.

    We can tune these things to be as inoffensive as possible and they will deliver a seeming that is impenetrable to the casual user, because that satisfies the utility function. That seeming will be worse than the loss of consensus reality we've experienced in the last few decades.

    I don't find it frightening that LLMs spit out things that are wrong. I find it frightening that so many people are ready and willing to cede the details to these programs and abdicate critical thought.

amelius 10 months ago

> This input

Honestly, the input doesn't seem very well chosen. It's a very low-resolution picture of someone with red eyes, cropped to a circle, with a grey icon partially covering it and somebody else half outside the frame.

rafram 10 months ago

How do you solve this without getting Gemini-style racially diverse Nazis?

  • Tinos 10 months ago

    I wonder if adding a simple "Describe yourself" feature alongside the image would help guide the model and fix this.

    Like if the model was told he's a white male in addition to being given the image.
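
    A rough sketch of what that could look like; `build_prompt`, `model.generate`, and `reference_photo` are made-up names for illustration, not any real Apple API:

        # Hypothetical: fold the user's self-description into the text that
        # conditions the image model, so it needn't infer traits from pixels.
        def build_prompt(base_prompt: str, self_description: str | None) -> str:
            if self_description:
                return (f"{base_prompt}. The subject describes themselves as: "
                        f"{self_description}")
            return base_prompt

        prompt = build_prompt(
            "portrait of the person in the reference photo as an investment banker",
            "a white male in his thirties with short brown hair",
        )
        # image = model.generate(reference_photo, prompt)  # hypothetical call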

  • Majestic121 10 months ago

    Since it's taking a photo of the user as input, force-match the ethnicity of the input photo and you're good.
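
    One way to sketch that idea; `estimate_attributes` is a dummy stand-in for a real face-attribute classifier, and none of this reflects Apple's actual pipeline:

        # Hypothetical: extract visible attributes from the input photo and
        # pin them in the conditioning prompt, so the generator can't fall
        # back on occupation priors when picking skin tone, hair, etc.
        def estimate_attributes(photo) -> dict[str, str]:
            """Stand-in for a face-attribute model run on the input photo."""
            return {"skin tone": "light", "hair": "short, brown"}  # dummy output

        def conditioned_prompt(photo, base_prompt: str) -> str:
            attrs = estimate_attributes(photo)
            locked = ", ".join(f"{k}: {v}" for k, v in attrs.items())
            return f"{base_prompt}. Preserve from the photo: {locked}"

        # The stub ignores its photo argument, so None works for a demo:
        print(conditioned_prompt(None, "the person in the photo as a rapper"))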

  • david-gpu 10 months ago

    Yeah, I wondered the same.

    Maybe attempt to predict and match the biases of the user prompting it? That may cause the least amount of friction.

    Or refuse to show images where the input data proved to be strongly correlated with sensitive traits like gender or ethnicity.

    I don't have a good solution.
