Paragraph Pollution: AI is (probably) greener than you typing on a laptop

blog.plinth.org.uk

3 points by tomjohnneill 2 years ago · 12 comments

TheLoafOfBread 2 years ago

A random text generator is even greener than AI or my typing. That doesn't mean the result is understandable, or even coherent.

dartos 2 years ago

Pretty shallow analysis.

How much energy was used to train GPT-2, 3, and 4?

Also

> We’ll assume that AI models have roughly the same split of operating costs as a typical data centre

I don’t think this is a good assumption. GPT is not a typical application; it requires a massive number of power-hungry GPUs.

It’d be better to compare the power cost to non-ASIC crypto mining farms.

  • tomjohnneillOP 2 years ago

    That's fair about ignoring the training cost. I did write a bit more going into that in a follow up piece here: https://notfunatparties.substack.com/p/ai-is-good-for-the-pl...

    Do you have any better sources for the power usage stats? It would be good to get a bit closer on that front. Having said that, even if the cost share is closer to 80%, that still puts it on par with a laptop for an average person.

    • dartos 2 years ago

      Well, OpenAI has about 30k A100s: https://www.tomshardware.com/news/chatgpt-nvidia-30000-gpus

      What’s the power consumption on that assuming full load at all times?
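A rough sketch of that arithmetic, taking the 30k figure above and assuming ~400 W per A100 (the SXM variant's TDP; PCIe cards are rated 250-300 W), and ignoring cooling and other datacentre overhead:

```python
# Back-of-envelope power estimate for ~30,000 A100s at full load.
# The 400 W per-GPU draw is an assumption (A100 SXM4 TDP), not a measurement.
num_gpus = 30_000
watts_per_gpu = 400

total_watts = num_gpus * watts_per_gpu            # 12,000,000 W = 12 MW
hours_per_year = 24 * 365                         # 8,760 h
kwh_per_year = total_watts / 1000 * hours_per_year

print(f"{total_watts / 1e6:.0f} MW continuous")   # 12 MW
print(f"{kwh_per_year / 1e6:.0f} GWh per year")   # ~105 GWh
```

That ~12 MW figure is an upper bound in one sense (the fleet won't run at full load at all times) and a lower bound in another (it excludes CPUs, networking, and cooling).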

    • dartos 2 years ago

      Also, I would expect OpenAI to be taking a loss on each individual inference request, since they also have a monthly fee, DALL-E, and loads of VC capital.

      No source for that, though; I just wouldn't assume that they're breaking even.

      • tomjohnneillOP 2 years ago

        I can definitely imagine they're not covering the amortised cost of the training with the cost per individual inference request. It seems less likely to me that they're making a significant loss on each subsequent request, but again no source from me on that either.

        Looking a bit more into this, I found this paper: https://arxiv.org/pdf/2311.16863.pdf. It references a table saying that text generation uses 0.047 kWh per 1000 inferences, which is 1-2 orders of magnitude lower than my estimate. Though that is for GPT-2, so it possibly tracks to something roughly in the ~0.001 kWh per inference range for GPT-3.5.
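Making that arithmetic explicit (the ~20x scale-up for GPT-3.5 is a guess for illustration, not a number from the paper):

```python
# Energy per inference from the paper's table (GPT-2-scale text generation).
kwh_per_1000_inferences = 0.047
kwh_per_inference_gpt2 = kwh_per_1000_inferences / 1000   # 4.7e-5 kWh

# Assumed scale-up for a GPT-3.5-sized model; a guess, not a measurement.
scale_factor = 20
kwh_per_inference_gpt35 = kwh_per_inference_gpt2 * scale_factor

print(f"GPT-2:   {kwh_per_inference_gpt2:.1e} kWh/inference")
print(f"GPT-3.5: {kwh_per_inference_gpt35:.1e} kWh/inference")  # ~0.001 kWh
```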

rsynnott 2 years ago

Do people actually use GPT-3.5 Turbo? Full-fat GPT-4 is 40 times more expensive...

  • tomjohnneillOP 2 years ago

    It's the default for the free version of ChatGPT, no? That's what the majority of people use.

    • rsynnott 2 years ago

      So, I don't use this stuff, but every time I see someone complaining about it doing something stupid, the response they get tends to be "that's because it's GPT-3, everyone uses GPT-4 now"; I took this at face value.

      • tomjohnneillOP 2 years ago

        I think it's a case of tech bubble vs the rest of the world. Most people are not subscribing to the paid version of ChatGPT, but a lot of people who spend a lot of time with these things are.
