Not by AI
notbyai.fyi:
> If humans rely solely on AI to generate content moving forward, any new content generated by AI may just be recycled content from the past.
To be honest, I’d struggle to find any human content that is not recycled from the past. I guess we are still better than AI at remixing sufficiently novel combinations of concepts but I think much of this AI content aversion comes from some kind of phobia, or an unwillingness to admit humans are shockingly unoriginal.
Philosophical food for thought: if all content is recycled / mixed from the past, when did innovation and creativity stop?
In other words, if there is truly no such thing as an "original idea", then how did the ideas that we are pulling from, deriving from and combining come into existence?
If all we are saying is that existing ideas inspire new ones, or that most human generated content is derivative, then I completely agree with you. I don't see how that proposition could be controversial at all.
But it seems to me that some people, at some points in time, somewhere have and will continue to contribute something original at least on occasion. Even if the "original" idea is 1% of the "intellectual product" and 99% is reusing existing concepts. To insist otherwise is to insist that we hit peak human innovation somewhere along the line and there's nowhere left to go.
The whole idea of "there is no such thing as an original idea" is utterly stupid. It's just a hyperbole and some people are taking it literally.
Of course there are original ideas. "C is not an original idea, because it's just mixing A and B..." OK, so there are only two possibilities:
1. Someone else mixed A and B before. Okay, then we can recursively find the first person who mixed A and B.
2. No one else mixed A and B before. Then "mixing A and B" is a new idea.
For some incomprehensible reason, people seem to accept "mixing A and B is not a new idea because A and B already exist." Like... why? Since the four basic nucleobases have existed, there haven't been any new DNA-based creatures?
The real reason it feels like there are no new ideas is simple: the "mixing" process happens gradually. Even if you come up with a really good "X + Z" idea, people (read: the market) might not accept it because what they're familiar with is "X + Y".
So you put a bit of Z to make "X + 0.9Y + 0.1Z", and another person who is as smart as you makes "X + 0.8Y + 0.2Z"... then when "X + Z" is officially a thing, people outside think you smart-ass guys were all just copying & pasting each other.
> For some incomprehensible reasons, people seem to accept "mixing A and B is not a new idea because A and B already exist." Like... why?
In copyright law (and I think patents?), there is the threshold of originality. Basically, you evaluate how much work and change went into something, to distinguish it from other works. Which IMHO comes down to how much of A and B you can still see in the mix, and how much mixing them changed the end product. If the change is low, then people are more likely to consider it not innovative.
Or how Homer Simpson once said: “People are afraid of new things. You should have taken an existing product and put a clock in it or something.”
Copyright laws are a tool, not an end in themselves. They aim to give money to inventors, not to judge innovation. You can argue they do judge in order to accomplish that aim, but that's a byproduct, not the aim. So let's not give the laws more meaning and usage than they deserve.
On a similar (I think) line of thought, but further along:
The reason to have property (also a tool, or social technology to be slightly more precise), i.e. ownership over things, is a byproduct of having a marketplace in which we can trade.
In this view, the marketplace (or trade, depending on your outlook) is THE WAY to come together and put all our skills together in order to co-create (con-struct) something much larger.
Alas, capitalism means that only a very few chosen elites (which are, by this point, mostly autonomous corporations) get to decide what we all work together to co-create. We as human individuals are only able to participate in this figurative "marketplace" (understood as a social-technology) as the part of merchandise, specifically as 'commodity'-style labor.
Do you have an alternative to suggest? For most of human history the chosen elites inherited their positions from an ancestor who won an important war. Capitalism may not be an even playing field, but the process that lets Elon Musk build landing rockets is an order of magnitude more even than the process that let Khufu build pyramids. Is there a model that might be more even still?
nevermind
Ah, that's fair. I formulate a similar observation by saying that digital assets inhabit a post-scarcity economy which doesn't work the same as a resource-based scarcity one, and using the same money for both requires more suspension of disbelief than most of us are capable of.
Totally agree. The idea that ideas/bits can be owned is a slippery slope with bad policy at the bottom. Maybe it was once necessary as a sort of training wheels for innovation, but it has long since become more of a drag than it's worth.
Copyrights don't protect novelty. Two people can independently create the same thing and both be entitled to copyright. You cannot do that in patent. Only one is an invention in the eyes of the law.
The threshold for originality in copyright is the bare minimum, so it's not a good standard for "invention". Patents have a much higher bar to overcome, and it is hard to put into a simple explanatory phrase.
I think what you are describing in copyright isn't the issue of originality, but determining what is encompassed in a copyrighted work. The court will analyze what the work consists of, and what of it was your expression, as opposed to someone else's. So, it's not novelty exactly but it's close.
> Copyrights don't protect novelty.
Yes, and no. I can't speak for laws in the USA, as I'm from Europe. But copyright laws in the EU do have parts which specifically target mixing and mashups of content, and how much new content of your own you need to add for it to count as a separate work. These parts were recently updated because of YouTube, meme culture and so on. But it's my understanding that for music and text quotes, the whole concept has been around much longer. So I guess US laws have something like this too?
So of course, this does not handle novelty on a global scale for all of mankind's knowledge and content, but the general idea, and how to handle it, is there.
> Two people can independently create the same thing and both be entitled to copyright.
Similar, maybe. But for literally the same thing, I doubt it. People are already getting sued for too much similarity.
My point is more that, in the way the law works out, what you get to protect is the part of an expression that is unique to you, which can include mashups, etc. My point being that this is perhaps distinct from novelty, as novelty isn't a requirement of copyright, just independent creation.
>Similar maybe. But for literally the same, I doubt it. People are getting already sued for having too much similarity.
Full stop yes. Independent creation of the same thing is not an issue in copyright law in the US. I do not speak about Euro copyright, as I'm merely a US atty. In US copyright law, a necessary element of infringement is access to the allegedly copied work - if access to the work is rebutted, then there is no copying and no infringement. The copyright stands.
>> Basically you evaluate how much work and change went into something, to distinguish it from other works.
Copyright exemptions (fair use/fair dealing etc.) are a million times more complex than that. There are times when one adds almost nothing yet will still get protection. Take political humor. A standard bit is just to point out something funny. Look at how often political comics (John Oliver, Jon Stewart, etc.) just play a clip of some politician. No edits. No commentary. The clip speaks for itself. That is still protected speech. Understanding how the comic adds to the clip cannot be expressed in numbers.
You are confusing the "copying" analysis element of infringement, with the affirmative defense of fair use. They are not related. A necessary element of proving infringement is showing what was copied, which most likely requires the court to construe exactly what is covered under a given copyright.
I mean, if the examples are to be believed, GPT4 seems to be able to explain what's funny about a given image. It's not out of the question that it or some future iteration could reduce that very thing to a discrete set of numeric scores.
If we follow this line of thinking: in the, let's call it, "pre-AI" days, all of the ideas that we have of the form "mixing A and B" are ones that at least one human found interesting/valuable (and the ones that become common and get remixed must have been interesting/valuable to many humans). Maybe the "originality" of the idea isn't the thing we care about, per se, but rather the fact that somebody thought it was interesting to shine a spotlight on this region of the idea space for the first time.
But now that we have software that can generate massive amounts of "Mixing A and B" ideas automatically... I dunno, seems like we're heading toward a sort of "pollution of the idea atmosphere" where we get a ton of new/remixed ideas dumped into the world that haven't passed through that initial filter of a human vetting them as particularly useful or valuable or interesting in the first place.
> I dunno, seems like we're heading toward a sort of "pollution of the idea atmosphere" where we get a ton of new/remixed ideas dumped into the world that haven't passed through that initial filter of a human vetting them
We already have that. Observe: ahem “It is an industrial capacity, that many people like the Reddit poem person who does the cow thing. And everyone delighted in the nonsense of words that are sense.” Or, perhaps you want something on-topic? “Your ideas about ideas are mere ideas. Ideas that are ideas are ideas, but these ideas are not ideas. lol!” Perhaps an unoriginal, unfiltered synthesis of some good ideas? “It is a truth universally acknowledged that a potato peeler could save you 5 Altairian dollars a day.”
We don't need AI to produce unoriginal, unfiltered internet comments. Certain YouTube comments sections provide a great example (though the problem with YouTube comments isn't as bad as people tend to make out). We humans already have social mechanisms to keep the good ideas and ditch the bad ideas, even when many of the humans aren't bothering to filter their own output.
There are two variations, one stupid, one not. The stupid one is "novelty no longer exists and maybe never did". You've covered it. The not-stupid one is the question of closure under creativity, in the same sense that numbers are closed under multiplication and addition. Sure, you can make infinitely many numbers, but you can never multiply and add your way to a cat, because a cat isn't a number. Is everything we do a rearrangement of basic known parts, much like numerals are rearrangements of basic known digits? It's a deep question.
If you need examples of creativity, look at math. Math is full of concepts that were novel at the time of their invention: complex numbers, groups, automorphisms, etc., etc. None of these concepts can be obtained by rearranging the "parts". Sometimes a new concept requires the invention of "parts" that had never existed before, and, in turn, becomes a "part" in future inventions.
Your "deep question" fails to address gspencley's point: If everything we do is a rearrangement of basic known parts, where and when did we get the basic known parts? From cavemen? From apes? From protozoa? Someone, at some time came up with the basic known parts.
So, was that real creativity, or not? If not, why not? If the parts are really "basic", they didn't come from other, pre-existing parts.
And if it was real creativity, why is it impossible for us to do the same?
All the advances relying on new concepts and discoveries or inventions in science, math, logic and technology would suggest that the basic known parts are not set. So yes, there's been plenty of genuine novelty. Culture as well as new forms of music, food and entertainment come into existence. New words are always being coined as languages are fluid.
In the context of this thread, however, an AI mixing a randomly selected A and B generates an idea that is just as original as a human's.
I do believe AI sometimes generates an idea that is just as original as a human's.
I don't believe originality is that valuable. It's a positive term to me, but just slightly.
I've seen a lot of people on HN credit the SV companies' "free food" practice to Google. Do they believe free food for your employees is a new idea that someone at Google came up with? I don't think so.
> 2. No one else did mixing A and B before. The "Mixing A and B" is a new idea.
With such definition, GPT-4 can create new ideas.
It definitely can. GPT-3 can too. You know the joke about "idea guy"? Who has "idea" but no programming/design skill and no connection to bring funding? New ideas aren't always very valuable.
At least GPT brings funding sometimes.
I'm finding it spectacular for generating ideas. Ask it for "40 ideas for plugins I could build for Datasette" for example. The results are comparable to what I've seen from a meeting room full of humans for an hour, but GPT-4 generated them in less than a minute.
Really? I feel like if you have to ask what plugins to build you're doing it wrong. You should know what plugins to build by being a user of Datasette and understanding its limitations for yourself.
I know Datasette better than anyone else, and I still couldn't come up with 40 useful ideas in less than a minute.
Once you have an AI to also prototype the plugin ideas, such an idea list may become useful. I've always seen that the owners of any product have more product improvement ideas than they could economically implement.
I asked GPT-4 to implement a simple plugin yesterday, and the code was 90% right out of the box.
I’ve always interpreted statements like “there’s no such thing as an original idea” or “everything’s a remix” not literally, but rather as a reminder that new ideas are born from a cultural context and the ideas in it, using those ideas like a language of creative expression.
I often take it a step further and use it as a reminder that new ideas are rarely born from one person but rather the contributions of many, each too small to be noticed as an “original idea”, but when viewed collectively, can be revolutionary.
Isn't the "no such thing as an original idea" point more about there being so many people alive now that one cannot think of anything that somebody else hasn't thought of as well?
So many advancements (like the airplane) often get invented simultaneously by different people in completely different areas of the globe without any connection to each other.
"C is not an original idea, because it's just mixing A and B..."
"A fax machine is just a telephone and a waffle iron!" —Abraham Simpson
I think that most people are attached to the idea that humans have some type of "soul", even if they don't call it that. This soul, as the thinking goes, is completely separate from material reality, and allows people to have ideas that come "out of nowhere" and are not inspired by anything in particular. In this way, humans are supposedly different from machines, whose outputs are a mere function of their inputs.
This is the root of the objection to AI. To say that humans are similar would imply that a person's thoughts are a function of their sensory experiences, and that is not a popular view.
Similarly, people like to think that a person's behavior is not a mere function of their experience, that they are capable of "choosing" how to react to their circumstances. They will accept no explanation that appeals to material conditions or biology. A man steals bread not because he is poor and hungry, nor because of some chemical reaction in his brain, but because he "chose" to steal. The part that chooses would be his "soul", which cannot be explained in terms of cause and effect.
You’re going to have a lot of unproductive conversations if this what you believe other people think.
There’s a much richer discussion going on about current-generation AI and its prospects that is grounded in plainly materialist and technical insights, with thoughtful differences of opinion about what current research demonstrates and about what the sophistication of biological/neurological machines may involve in comparison.
You might find that conversation more enriching if you tune into it instead of the one you seem to be focused on.
No, this is a straw man. The line of reasoning here, first, makes a lot of presuppositions about AI, and it is overly reductionist about human intelligence. I see this comment constantly in all HN threads about AI: "human intelligence is nothing but probabilistic sparks" or whatever that is actually supposed to mean.
> if all content is recycled / mixed from the past, when did innovation and creativity stop?
And the obvious answer of course is that recycling / mixing can be innovative and creative. The dichotomy only exists in our heads.
I believe this is because of how our culture chooses to reward intellectual output. Your output needs to appear sufficiently original to be worthy of reward and protection.
For example, a musician who creates a new song is not rewarded, and is in fact punished, for naming all the influences that went into said new song (both morally, by people who would think less of them, and legally, by lawsuits asking for a share of their profit).
I feel like this heavily requires “citation needed”. There are plenty of these “inside the studio” type interviews where an artist describes at length their inspirations and the other artists they admired and wanted to mimic in some way.
> I feel like this heavily requires “citation needed”.
You might have missed that I was stating my belief. I can only cite myself as the ultimate authority on what beliefs I have or don't have.
If instead of believing something I knew it, you would notice from the lack of "I believe" and from the appropriate citations. If we had to provide citations for every belief, we could not talk about theories and ideas we have which as of yet lack sufficient evidentiary backing.
Now of course it sounds like what you are trying to express is that you are disagreeing with my belief. Which is perfectly fine.
> There are plenty of these “inside the studio” type interviews where an artist describes at length their inspirations and the other artists they admired and wanted to mimic in some way.
Yes. Humans are full of contradictions therefore you won't see black-and-white behaviour from them.
I think this is mostly referring to the Blurred Lines verdict. On the other hand, artists have a strong incentive to record their songwriting and recording process, so that they can head off any potential disputes about writing credits from people actually in the studio (this situation is statistically much more likely than somebody not involved suing after the song's release).
> I think this is mostly referring to the Blurred Lines verdict.
Interesting. I was mostly referring to the "Shake It Off" and the "Thinking Out Loud" copyright cases.
The "Shake It Off" case seems like it involves lyrical content, which had always been a primary concern of music copyright, even before "Blurred Lines". The "Thinking Out Loud" case seems to involve similar issues to the "Blurred Lines" case, but was filed long after[1]:
> While the Gaye family is not involved in the “Thinking Out Loud” lawsuits against Sheeran, the precedent set in their case looms over all new copyright infringement claims. Notably, the court took into account not just sheet music but studio arrangement too and ruled that “Blurred Lines” significantly aped the vibe of “Got to Give It Up,” something that had previously been beyond copyright protection.
[1]: https://rollingstoneindia.com/copyright-infringement-lawsuit....
I have had exactly this argument before and this has always been my issue with the claim. I recall people saying this about music when I was in college, an example I find especially frustrating because, per my friends argument, all contemporary music is influenced by, say, Bob Dylan, the Beatles, 'the greats' etc, and nothing new can truly be created in the shadow of these classics. It's nuts because it ignores all of human civilization up to that point, and doesn't give credit to people in the present day for being able to be original.
But I think the way you have put it here is more clear than anything I've been able to manage. I think I put it in a more extreme form, which is to say that the earth had no life on it for ~3 billion years, and still no humans until ~100k years ago. At some point the first music was made. And if it could be original then, whatever it was that made that possible, should presumably make it possible now, too.
That "1%" original is almost always just 1% uncited/unknown inspiration.
If you listen to the stories of how things like major inventions came into being, it's almost always some domain expert visited some domain in which they were not an expert and inspiration struck on how to combine the domains.
Otherwise, the rest of invention is either human imitations or improvements to things found in nature or not inventions at all but discoveries stumbled upon by accident/experimentation.
Nothing is just willed into existence by pure "thought". Every thought you've had is a product of the things you've been surrounded by.
So Ramanujan with no formal training in mathematics was really surrounded by advanced education of mathematics and had no original ideas? I feel if more people studied philosophy they may discover how problematic these reductions are.
Math isn't a human invention at all. It's the discovery of the nature of the universe and an expression of the world around us.
What does that even mean? Mathematical ideas are ideas. The notion of continued fractions is not just an expression of the world around us. This is basically hokum. The point is that human intelligence utilized original thoughts in the case of Ramanujan.
One of your three sheep is black. 1/3: that's a fraction, an idealized representation of the real world. Now you take that concept and do it again, over and over, ad infinitum.
That's not a magical invention that has no bearing on the world around us. That's just an expression of real world taken to its logical conclusion. It was not invented from a vacuum, it was invented from the input of the world around us.
What is "hokum" is the idea that the human mind would somehow come up with meaningful output without meaningful input to seed it.
I think the simpler answer is that GP's assertion is just incorrect, and that humans regularly create genuinely original content.
I'd say it's possible that there have truly not been any original ideas, and the earliest human ideas might've been just remixing observed natural phenomena. What would muddy the waters there is that humans are complex organisms that in turn exhibit naturally complex behaviors both on an individual and a societal level. And there's even emotions that we feel due to our biology. But if you define all these as phenomena we observed about ourselves, I think my initial assertion still makes sense.
The earliest humans didn't know everything we know today. How could they have had any ideas about digital simulations, branching timelines or nuclear war?
There's a million things today that didn't exist back then, and many things that came to exist culturally over time, or were discovered. Do you think they were sitting around the fire talking about how the molecular motion registers as a conscious heat sensation in animals or what the proper interpretation of QM is?
I don't have a definite conclusion on anything, but I remember having this discussion with a friend a long time ago, and it made me think... What he was trying to convince me was that all of those things you mention can be seen as subclasses of other natural things:
Digital simulations are self-explanatory: all things simulated must have an existing thing they are simulating. Digital is one type of representation of an already existing thing. It can be seen as a subclass of concept/drawing/painting.
Branching timelines are more abstract, but ultimately based on the idea of a tree. Dilemmas over branching decisions have existed as long as we have been able to think. We branch timelines in our imaginations while playing chess or doing any strategic endeavor where decisions and responses to those decisions matter. Food for thought: decision trees, binary trees, random forests, etc. These don't quite cover the complexity of QM, but it's hard to ignore that there is overlap.
A nuke is a type of bomb, which is a type of rapid expansion, which is just an expansion of something (volcano, lightning, comets, etc...)
My friend's point was that of course a caveman won't be thinking about quantum mechanics, but they were thinking about the more basic things that ultimately led to their descendants thinking about QM.
Just food for thought.
No true original ideas and there was just remixing natural phenomena? Really? So writing was just remixing all the writing that people saw volcanoes and rainstorms do?
I would advise reading allllllll the way till the second sentence in my comment.
Writing was a remix of pictorial representation, which was a remix of the way things like ox heads, flowing water, and birds look.
>> No true original ideas and there was just remixing natural phenomena? Really? So writing was just remixing all the writing that people saw volcanoes and rainstorms do?
> Writing was a remix of pictorial representation, which was a remix of the way things like ox heads, flowing water, and birds look.
That's the kind of absurd reasoning you get when you take some proposition and try to defend it to the death even though it's wrong.
I'm not the one preserving the word 'remix' in this context, I just wanted to put it in the terms that GP was using. It certainly sounds more absurd when you use that word.
But the point is that human creations are always based on something else, so pointing out that AI is basing its output on a database of previous work does not necessarily mean that it is not creating new art, or being creative, or however else you want to put it.
It's like the ridiculous lawsuits that claim a song is stolen because it uses the same chord progression as an older song, where there are almost always centuries-old examples of that same progression from Beethoven et al.
Your point seems to be that the reasoning is absurd because it's defending an incorrect premise, and the premise is obviously wrong because the reasoning is absurd, so I'm not sure you'll want to know what the people on the other side of this discussion think.
Combinatoric complexity blows up fast
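It does. As a back-of-envelope sketch (the pool sizes here are arbitrary examples): the number of ways to mix k "parts" out of n existing ideas is the binomial coefficient C(n, k), which explodes long before n gets large.

```python
from math import comb

# Count the distinct "mix k parts out of a pool of n ideas" combinations,
# ignoring order and repetition.
for n in (10, 100, 1000):
    print(f"n={n}: pairs={comb(n, 2)}, five-way mixes={comb(n, 5)}")
```

Even at n=100 there are already 75,287,520 five-way mixes, and at n=1000 over 8 trillion, so exhausting the combination space is not a realistic worry.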
Your comment immediately made me think of The Beatles. They are responsible for so many original things. They launched the album era and invented the concept album. They brought in entirely new instruments into rock music. They changed the culture around sexual expressiveness, and they introduced the western world to eastern philosophies.
But when you start to scrutinize everything they did, there are many clear examples of things and people that did those things first and directly influenced them. However, they may have been the first to do it in their specific flavor, which is undoubtedly original.
The most original thing I can think of that The Beatles did was add sound to the runoff track at the end of Sgt. Pepper’s Lonely Hearts Club Band, so that it keeps playing the sound infinitely until the needle is removed on the turntable. I’m sure someone with more insight can point to another musician doing that first, but at The Beatles’ scale, we collectively come to believe they were the first to do it.
> Philosophical food for thought: if all content is recycled / mixed from the past, when did innovation and creativity stop?
Somewhere around 2016, when the golden Age of Internet-Content ended, as the last undiscovered part was added to the mix...
But jokes aside, does innovation really need to be new? Or is it still innovation if, unaware of the prior art, you create something that is completely new to you? Take for example all the retro waves which regularly come back, where people discover something completely new to them, and make up things that are completely new to them, which earlier generations already did long ago.
> In other words, if there is truly no such thing as an "original idea"
Does "original" mean 100% new? And would it demand that all parts are new, or is it still original if we just arrange old parts in a new way?
I think this is purely a statistics problem. Like all breakthroughs are actually _discoveries_ of things that already do exist or mechanisms that can be exploited somehow.
There is a huge amount of information flowing around us in realtime, and linking some thought/thing to something else, which then might have implications (subsequent discoveries), is statistics. We do have people with better wetware, so they have an increased chance of recognizing something in front of them, as well as of linking it in a "novel way", but this is not really necessary, it only increases the odds.
Even pretty radical things can be explained this exact way, up to having some sort of brain "damage" (or drugs like LSD) that changes mental pathways and has a chance of actually being right and leading to discoveries, though the chance here is very low (not every trip is a breakthrough in art/science, right?)... a bit like people who can smell sounds, or who link some natural phenomenon with an emotional situation between people and write about it.
> peak human innovation
that might simply be a nonsense question if you follow my previous argument. there simply is no real "innovation", only "discoveries" of stuff. and we meatbags with very limited/unreliable wetware at some point can't "progress" any more, but systems free of our biological flaws might overcome our limitations and do just that for us.
with our naked eyes, we cannot see things like infrared light, but it exists and some tools helped us to understand everything around it somewhat. now we're building more advanced tooling to get through the next plateaus - just like we have to build very sophisticated tooling to watch further into space or more closely into fractions of atoms, which in turn lead to more discoveries, and ultimately "innovation" we can feel in our lives.
the only thing I will 100% agree to is that a real AGI will be the _last_ discovery/innovation humanity will ever make - and that is a good thing - it means we finally overcome our biological limits somewhat.
There's nothin' you can do that can't be done
Nothin' you can sing that can't be sung
Nothin' you can say, but you can learn how to play the game
It's easy
Nothin' you can make that can't be made
No one you can save that can't be saved
Nothin' you can do, but you can learn how to be you in time
It's easy
All you need is love
Great song, but isn't it just a derivative of songs that came before it? Nothing creative here.
That seems like a straightforward one; if humans are incapable of original thought then the original part probably came from sources that are not human and 'innovation' in the sense of 'an AI couldn't do this' never existed. We just need to make the AI better at interfacing with the real source of novelty - reality - than the current method of trying to interpret existing pictures on the internet.
There will be a big leap forward when we get to teaching these things from video streams so they can get the concept of 3D properly.
Asking "why did innovation and creativity stop" because ideas are remixed is like asking when flowers stop blooming because the sun exists. The sun is an integral piece of why flowers grow.
Read How to Ride a Bicycle. Human creativity is just a lot of work, often connecting novel ideas together while doing so.
Recycling / remixing content from the past *is* innovation and creativity.
> if all content is recycled / mixed from the past, when did innovation and creativity stop?
Your question contains the answer. Remixing begets innovation / creativity begets remixing.
It's just something people say lazily in passing without giving much actual thought to why they're repeating what they saw somewhere else in a similar thread.
Creativity and innovation didn't ever stop.
It's only words and concepts, and we keep inventing new ones constantly. We also imbue words and concepts with new meaning over time.
Maybe they didn't stop, but they're converging towards zero. In the past hundred years, artists were coming up with a lot of things that had not existed for all of humanity. Nowadays, it's rare to find truly original style. The space has been explored thoroughly. It's also not just random search. It's no coincidence that most image generation prompts have an existing artist or style in them. If you leave it off, you just get crap.
>Maybe they didn't stop, but they're converging towards zero. In the past hundred years, artists were coming up with a lot of things that had not existed for all of humanity.
How do you distinguish this from survivorship bias? There were probably tons of shitty artists 100 years ago but you haven't heard of them.
> It's no coincidence that most image generation prompts have an existing artist or style in them.
I've been thinking about this. I don't think it's evidence that anything is getting worse, because a lot of the prompts people use are referring to recent artists/styles. Is it possible that leaving out the style/artist leads to an average of every style which doesn't really work?
>In other words, if there is truly no such thing as an "original idea", then how did the ideas that we are pulling from, deriving from and combining come into existence?
By observing nature, and then iterating on that over thousands of years of culture. Of course the question is whether and to what degree the process of iteration and transformation matters.
How would you define an "original idea?" An idea with no origin or source, that spontaneously manifests from the aether? That isn't possible, everything must be extrapolated from something prior.
Yes, it's more accurate to say that ideas exist on a spectrum of novelty, with varying degrees of originality.
> when did innovation and creativity stop?
Or did it ever stop?
it is like they say "there's nothing new under the sun, but maybe behind it?"
The problem isn't art, it's discovery. There's more new original art being made and published right now than ever before in history. I'm in art school so I see a lot of it. Sure, there's a lot more chaff to cut through than there was when publishing was almost exclusively controlled by large corporations, but it's there.
Art that isn't recycled is almost always an illusion caused by your not being familiar with the things it was remixing. Nobody pulls things out of thin air. Even whacked-out acid trip visions were molded by our cultural experiences and aesthetics. There are a few people who make large shifts in their artform-- e.g. Jackson Pollock, Jimi Hendrix, and Antoine Carême-- but a) they were still making incremental progress, and b) without remixing you'd have a pretty limited set of things to choose from for your entertainment.
PS Edit: Right now AI-generated art is novel, but it has far less potential to meaningfully advance art as time progresses. It will mimic what it sees, but it won't see anything that a creative cultural avant-garde doesn't produce first, and as long as real people are doing it first, there will be eyes that want to see it before an AI algorithm waters it down into some conceptual amalgam of its real form. That probably includes commercial entities who are deliberately trying to make themselves stand out.
The AI aversion has several reasons:
1) The output, while impressive on the surface, is bland and recycled. It will drag down the general level in the same manner that CGI has destroyed movies.
2) People don't want to consume AI generated content in the same manner that they generally don't watch Stockfish vs. Stockfish.
3) It is not phobia, it is disgust at humans being dehumanized.
The phobia seems to be on the side of AI corporations, who quickly step in any discussion that questions their business model.
Regarding point #3.
The problem people have with AI is that it is cheapening the things that make us "special". If anything I can draw, paint, write, create, and pour my heart and soul into can be acceptably approximated by a "bot" in five seconds or less, then what do I have to offer to the world beyond my "work" output? The things that creatively inspire me have been commoditized in a way where it's exponentially harder for me to generate value doing them.
Artists and musicians have famously struggled for centuries. A technology that raises the bar on an already exceedingly difficult path (making money doing creative things) makes that particular situation much worse.
A painting or story made by artificial intelligence is not human because it does not have a life story behind it. I still enjoy reading the artist's biography, some of his correspondence, trying to understand his creative process. If you pour your heart into your work we can bond through human experience, maybe we can even sit down and talk. An AI does not fear death or loneliness. Sometimes I think that part of this site's audience only considers art from a consumer-product perspective. For me it goes a little further.
> Sometimes I think that part of this site's audience only considers art from a consumer-product perspective
Imagine you are an artist. A patron connects and talks with you about a piece of art, and walks away happy and feeling connected. But you do not sell any art, because your pieces are indistinguishable from what the AI can produce for the cost of electricity. Now, if patrons are willing to pay for "bonding over human experience", GPT can probably be trained to do that as well. Generative AI eventually eliminates the livelihood of almost all artists.
You're probably onto something, but then it's also nothing new that we attribute value to handmade goods over industrial goods. Because they're backed by storytelling (real or imagined), rare/exclusive, etc.
It's clear that amidst a deluge of AI-generated content, audience urges for authenticity will rise. Attention is in more finite supply than content, and has been since even before generative AI; only so much will stick out, and it will perhaps be the most authentic or analog content and goods.
The real question is whether AI/AGI can make it past the "authenticity threshold" and xenophobia to where we also accept AI storytellers and brands as eligible.
Perhaps opening the AI content firehose will force everyone to start carefully curating what they consume. This might actually be an improvement, and might be the way out of today’s limbo where crap content is snuck onto our plates half the time.
Don't we already, though? I mean, how many of the hundreds to thousands of mediocre movies that hit Netflix, Hulu & co every year do you watch? If they were AI-generated, would it actually matter to you?
As far as I can tell, there's just two interesting things to watch out for:
- Personalized AI-generated content. What if the movie were just for you? What if it expired like a Snapchat, never to be seen again by anyone else? Would this tickle us? Would we succumb to new filter bubbles, increasing the distance between us?
- Can AI make the movie that's so good that it sticks out and does become the one we watch collectively?
Ideally, AI art would be tagged on any large platform and AI artists would be forced to compete in a race to the bottom against all their fellow prompters.
I wish this would happen. But, looking at the current world where publication and distribution are dirt cheap and simpler than ever with the internet, most people's media curation is "what's free" as opposed to "what's good." If the new torrent from the firehose is AI-generated content, careful curation will lose out even more. The time and money cost become harder to justify.
I think you also discount that handmade items are generally valued for being higher quality and sturdier than mass produced items in many cases. It’s not just this kind of emotional thing.
Exactly this. The vast majority of art is unoriginal and derivative. That doesn’t mean it’s bad art, just that it is not revolutionary.
So what? There are a handful of truly revolutionary artists each generation. Faulting AI for producing merely good and interesting art is missing the point.
I don't know about other people but I don't "fault" the AI anything, nor am I even particularly bothered by AI-generated content per se.
What fills me with dread is the obvious glee over removing human work from sellable products, when there is no viable alternative to working for most people. We've created a world where a lot of jobs can be automated, and that is somehow a bad thing, and it's going to be used to crush people. It's a labor issue not a philosophical one.
Would you be happier if your car / bicycle / city bus was produced entirely by hand, with no tools or automation?
Labor will shift. Bemoaning new efficiencies and new empowerment because they reduce the need for meaningless labor consumption is backwards. We should be sensitive to those whose lives will take an unexpected turn (I may be one of them!), but we should do so in the context of celebrating greater individual leverage to create and produce.
No, as I was trying to imply I don't particularly care about the use of a given technology per se, I care about the consequences of its application to labor.
"Labor will shift" makes it sound like a smooth and automatic process but it is not. People will lose jobs with no equivalent replacements, families will fall into poverty and misery when they hadn't before. In a place like the US with healthcare tied to employment, people will die. And eventually, it will reach a stable equilibrium with fewer jobs that can support a reasonable quality of life. That's what labor shifting looks like, we know because we've seen it over and over again.
The changes might ultimately be net positive for a great many people, but that's small consolation to the people whose lives will be made much much worse during the transition. None of this is inevitable, but historically it's the path we've chosen and we seem even more dedicated to this outcome now than at previous times of change in the relation between labor and survival.
I think a big issue I take with the removal of humans from art creation, or at least discussions around it, is that art is always talked about as being separate from the creator. As a product the person makes.
For many pieces of art, what draws me to them is knowing that the creator put a piece of themselves into the work. (I'm going to avoid the 'separating art from the artist' discussion; that's separate from what I'm discussing here.)
An example I've talked about in the past on HN is Raymond Carver. Carver's short stories are often centered around blue-collar men in the Midwest. This is because Carver was a blue-collar man from the Midwest. I find that interesting. I enjoy reading about Carver's life, and reading about what people had to say about him.
Essentially, write what you know.
Art made by AI is missing that - and I think the frustrating part for me is that I don't think it's that hard to bring it back.
Show me the prompts used for the art. Tell me why you used those specific prompts. What did those prompts mean to you? I want to know all of those things.
I also think people stop too soon. Why not take what the AI has made, do something to it yourself, and then feed it back into the AI to see what happens?
> Labor will shift.
Until it doesn't. I mean until very recently the narrative justification for extreme automation was it would free up our time to create art and stuff. Now that's in question.
It's all excuses to avoid thinking about the problems the automation technology will cause. Avoidance worked OK in the past, but as these technologies get more and more extreme, the problems will get more and more pressing, and the old lazy excuses won't cut it anymore.
> ...but we should do so in the context of celebrating greater individual leverage to create and produce.
And the end point of that is a small cadre of elites who control the machines, and hordes of people left with no economic value (except, perhaps as slaves for elite amusement, like gladiators and prostitutes).
People who say "labor will shift" tend to have a significant overlap with the people who say "one day AGI will do all the work for us so we can live in paradise"; which, in turn, has significant overlap with the people who don't spend nearly as much time as they should acknowledging that getting from point A to point B is going to be the bloodiest proposition in history and not a guarantee besides. Probably because there is yet another overlap with people who find technology far more fascinating than society, and thus don't care to look beyond first order effects.
Looking for a phobia to describe AI content aversion is an incorrect approach. It yields no explanation other than that “something must be wrong with those people”. Instead, try looking at it from a rational biological standpoint and it makes perfect sense.
We biologically have a desire to live. Part of living is confronting threats to our survival, and hopefully defeating them. Our minds are what provided us with the ability to survive despite there being stronger, more vicious competition out there. Why would we want to surrender our one competitive advantage?
People aren’t stupid. They know that AI will continue to progress (“technology must progress”, says the technologist) and that it threatens their way of life. Truckers know that AIs will automate their jobs. Artists know it will automate theirs. Everyone knows that AI is coming for them, sooner or later. If not in their career, then maybe in their social life, like video games and social media have decimated in-person communication.
Those are things we need in order to survive. Our jobs provide us the money we need to meet our needs, and give some of us meaning in life (no, UBI proponents, receiving a check doesn’t solve all the problems). Our social lives are paramount to our health. What will humans be doing all day when AI has taken all our jobs and we are all talking to chatbots all day which cater to our every proclivity? Maybe that is some folks’ ideal worlds, but certainly not mine.
Given the mixed results we’ve seen with technological innovations in the past, rather than giving a negative connotation to the people who are cautious or concerned about AI, why not listen to them? Rather than having a phobia, maybe they have a valid point.
There are many rational reasons for a person to see these questions differently to you and plenty of explanations of those reasons around. You may disagree with them, but to claim that differing views must spring from phobias or other irrational mental states rather from a different assessment of the evidence is baseless ad-hominem. I would strongly recommend avoiding statements of that form in general as they achieve nothing good.
I could make a similarly disrespectful and unsupported claim about the mental state of people who believe most human creative activity is simply recycling learned ideas, and we could spend some time flinging insults at each other, but why?
Hear me out. In the most basic sense, to believe the mind a box that can output things not based on input is inherently to believe it "irrational" in the truest sense of the word.
It is to believe the mind to be non-functionally pure[1] in the programmatic sense. Output based somehow other than purely on input.
And if that's the case, the question is where does that irrationality come from? A random number generator? Any developer knows there's no such thing as actual randomness. It's all either list-based or based on environmental aspects.
I don't see room for an argument to be made that the human mind isn't just mixing ideas/the environment that doesn't rely on some form of magical thinking.
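The "no actual random" point above can be sketched in code: a pseudo-random generator is a pure function of its seed, and the seed itself comes from the environment, not from nowhere. A minimal sketch (the multiplier and increment are the textbook ANSI C `rand()` constants; everything else is illustrative):

```python
import time

def lcg(seed: int):
    """Minimal linear congruential generator using the textbook
    ANSI C constants. Each output is a pure function of the
    previous state: same seed in, same stream out."""
    state = seed
    while True:
        state = (1103515245 * state + 12345) % 2**31
        yield state

# The "randomness" is just hidden determinism: the seed comes from
# the environment (here, the clock), not from nowhere.
env_seeded = lcg(int(time.time()))
print([next(env_seeded) % 100 for _ in range(5)])

# Re-running with the same seed reproduces the sequence exactly.
a, b = lcg(42), lcg(42)
assert [next(a) for _ in range(3)] == [next(b) for _ in range(3)]
```

(Hardware entropy sources complicate this picture somewhat, but even those draw on physical state in the environment rather than conjuring output from nothing.)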
>In the most basic sense, to believe the mind a box that can output things not based on input is inherently to believe it "irrational" in the truest sense of the word.
No, it believes that the mind has an internal state. We have many physical cases of state which is in practice undecidable. Moreover, in many formulations of quantum physics one can have a state which is unreachable even in theory - not just hidden-variable-style QM theory, there are ways to get unreachable state even in 'regular' QM[0].
>And if that's the case, the question is where does that irrationality come from?
The universe itself. The linked PDF - you might be familiar with the author, he features often on HN - provides one method this is possible. Though, I'm not saying this is true, I suspect it may not be.
This is a profoundly counterproductive way to approach a debate and I won't engage with it.
I don't see how coming at a debate from first principles is somehow counterproductive?
Shining light on the unsupported parts of the other person's argument is step one of debate.
I contend there is no proof of a machine capable of creation purely from the ether.
The first paragraph is just a non sequitur anyway.
Care to explain?
I imagine there'll be a lot more talk of the soul from a lot of techies from now on. Or at the least talk of the human touch from people who were erstwhile trying to get machines to automate everything.
Hand-crafted goods have been a thing ever since the industrialization of production started. Recently, hand-crafted has been marketed as upmarket boutique, and I suspect that AI-generated content vs. hand-crafted content is going to go down the same route.
Indeed. A very niche market, of course.
"Yes this AI generated forgot password flow is great, but it doesn't have soul."
I like it.
The chatbot should feel like it successfully battled inner demons earlier in life
Searching Google/SO for some obscure error message? Nah, thanks. I'll take the AI with no soul.
It won't only be error messages people will use AI for, but the generation and regeneration of whole applications; that is, your job.
What I find amusing is not the statement you quoted, but this one:
> The Not By AI badge is created to encourage more humans to produce original content and help users identify human-generated content.
How does this encourage humans to produce more original content? It may help users identify human-generated content - if they care. But perhaps more usefully it helps AI identify human-generated content to avoid training on its own garbage.
AI will just add the badge to the content it generates.
> I’d struggle to find any human content that is not recycled from the past.
The word "recycled" attempts to carry a lot of weight here, and not successfully.
In 1905 Einstein published his famous four papers that could not have existed without recent work done by Planck, Michelson, Morley and Maxwell.
It would be ridiculous to describe those papers as recycling.
I think some of it comes from the smug disdain that generative art fans throw at traditional artists.
If anything, that smug disdain is a reflection of the perceived elitism of traditional artists.
A small culture war that was smouldering long before the appearance of generative models.
The elitism of having a Twitter account and a Wacom tablet? I've found traditional artists are very happy to help people out, especially in the digital art world. They just won't draw your anime waifu OC for exposure.
I think it's interesting to contemplate the process... when an AI tool "remixes" existing works to make a new work, is it "merely" using statistical processes to do so, or does it "understand" what it's doing?
But perhaps the process doesn't matter as much as the end result.
Maybe another interesting aspect to consider is: humans create new things (even if by remixing and deriving from old things) all the time, because they want to, because they choose to do so.
Thus far, these AI creations have been made at the behest of humans demanding them.
Perhaps that will, that desire, to make something new in the first place will remain a differentiating factor.
That’s basically the thesis of Baudrillard’s Simulacra and Simulation: that in the postmodern world everything is a signifier, simulation, or remix of something else, and nothing is truly original anymore.
It’s probably not completely accurate to say there’s nothing truly original anymore, but it is probably true that the rate of discovery of truly novel and original concepts and ideas has slowed to the point it may seem that way. Possibly b/c all the low-hanging fruit has been taken, and only the more difficult discoveries remain.
A lightbulb is clearly a new invention when all you have are oil lamps.
You can retrospectively say we've always had a device that creates light from heat, like this 'new' lightbulb and metaphysically you'd be correct.
But the properties and design of a lightbulb are different to an oil lamp. That's new enough to be called original and Thomas Edison (and whoever else helped him) are the originators of the lightbulb design and they made it real.
An original thought? Probably not, an original object, yes. Very much so.
This is overly reductionist. If you are struggling, then your notion of recycled is too broad. And saying humans are shockingly unoriginal seems to be part of this new fashion of downplaying anything done by humans, as if it were trivial compared to some imagined alien race or Kurzweil singularity thing. It's a bit sad, but it is not a very well-founded position.
The fear comes from much deeper. It is the fear of the kernel of reality that doesn't have any form, only function, the profound Lovecraftian fear of a mindless force. Although, some people in the story did worship Cthulhu.
The argument should be about capacity for creation, and to me AI generation just doesn't seem like true creation, but more of a cheap magic trick.
Okay, but if that's true, does that make all figurative art remixes of these? That seems like a stretch. And where did they come from?
when human made content is "recycled" from past human content, it is not "recycled" using statistical algorithms, it's recycled based on understanding, meaning, aesthetics, emotions, and a whole list of traits that LLMs will never have.
> Whoever has emerged victorious participates to this day in the triumphal procession in which the present rulers step over those who are lying prostrate. According to traditional practice, the spoils are carried along in the procession. They are called cultural treasures, and a historical materialist views them with cautious detachment. For without exception the cultural treasures he surveys have an origin which he cannot contemplate without horror. They owe their existence not only to the efforts of the great minds and talents who have created them, but also to the anonymous toil of their contemporaries. There is no document of civilization which is not at the same time a document of barbarism.
Good artists create based on their experiences, not based on the content they have consumed.
Are you suggesting that the content they have consumed is not part of their experience?
Yes. Reading about something is not the same as experiencing it.
The difference is that AI is using pattern matching while humans use reasoning to come up with new things.
That means AI is not capable of producing meaningful _new_ content like discovering new mathematical theorems, because AI does not understand maths, whereas humans can come up with something meaningful based on _understanding_ of the content they have learned from.
This is why when you ask e.g. ChatGPT about something it has not been trained on, it can only come up with garbage, whereas a human would likely be able to provide a meaningful answer based on looking at the same training data, if that makes sense.
Complete nonsense. https://www.nature.com/articles/s41587-022-01618-2
Language models can generate novel functioning protein structures that adhere to a specified purpose. Structures that didn't exist before, never mind appear in the dataset. The idea that there's some special distinction between the reasoning LLMs do and what humans do is unfounded nonsense.
A distinction you can't test for (this so called "true understanding" ) is not a distinction
This is the same as getting ChatGPT to calculate something. It is likely that it can infer a correct result from the training data and give right "new" answer, but it doesn't mean it has any understanding of maths.
That's why models like ChatGPT are trained on massive datasets, to hide the fact that the AI is actually a very dumb pattern-matching machine.
The only reason they found these "new" protein structures is because AI could match them to a pattern that it learned from the training data.
They even claim this:
> akin to generating grammatically and semantically correct natural language sentences on diverse topics
Just like ChatGPT can generate grammatically and semantically correct natural language, except if the topic is not something it was trained on it will output grammatically and semantically correct nonsense.
> ProGen can be further fine-tuned to curated sequences and tags
Which suggests there still needs to be a human who can reason in order to curate the sequences, something AI can't do and at this stage probably never will be able to do.
This is something companies running these models won't openly admit, because that would confuse investors.
Our language, Tiger, our language, hundreds of thousands of available words, frillions of possible legitimate new ideas, so that I can say this sentence and be confident it has never been uttered before in the history of human communication:
"Hold the newsreader's nose squarely, waiter, or friendly milk will countermand my trousers."
A Bit of Fry & Laurie Concerning Language: https://www.youtube.com/watch?v=3MWpHQQ-wQg
"The Greeks invented everything, all else is variation."
I wonder if people have actually tried prompting chatgpt for lists of creative things.
make up a list of fictious creatures and describe them. make sure the names aren't reused.
These seem pretty creative to me? are they? I'm not a fictional guy so I don't really know?
can you be extraordinarily creative and unusual?
Sure, here are five highly creative and unusual fictional creatures:
Crystalline Leviathan: The Crystalline Leviathan is a massive creature that lives deep in the ocean. Its body is made entirely of iridescent crystals that refract light in all directions, creating a dazzling display. The Crystalline Leviathan feeds on deep-sea creatures and can grow up to hundreds of meters long.
Plasmic Shapeshifter: The Plasmic Shapeshifter is a shape-shifting creature that can manipulate its own plasma to take on different forms. It has no fixed appearance, but usually appears as a mass of writhing, translucent tendrils. The Plasmic Shapeshifter is highly intelligent and can mimic the voices and mannerisms of other creatures.
Quantum Butterfly: The Quantum Butterfly is a tiny, delicate creature that flits through the fabric of space-time. Its wings are made of pure energy, and it can move at incredible speeds. The Quantum Butterfly is believed to be responsible for some of the most unusual quantum phenomena, such as entanglement and superposition.
Cosmic Serpent: The Cosmic Serpent is a serpentine creature that lives in the void of space. Its body is covered in shimmering, rainbow-colored scales that reflect the light of distant stars. The Cosmic Serpent feeds on cosmic radiation and can survive in the most inhospitable environments.
Dream Weaver: The Dream Weaver is a mysterious creature that lives in the realm of dreams. Its body is made of ephemeral, mist-like substance that can change shape at will. The Dream Weaver is said to be able to influence the dreams of mortals, and can even enter their dreams to communicate with them. It is rumored that if you catch a Dream Weaver and release it, it will grant you a wish as a reward for sparing its life.
I mean, in some sense these are creative.
But they mostly read like someone churning out monster manual entries by pairing animal names with adjectives then putting in the obvious two sentences of description.
Nothing about it engages my brain or strikes me as brilliant, insightful, or what I'd usually mean when I say "creative".
Agreed, and I think you've put your finger on the thing that no one is talking about.
Further, the complexity of creation with a complete lack of creativity is... really creepy to me, and does the opposite of inspire.
I would argue most of the stuff we see, even the truly brilliant stuff, is the result of a long slog of a process usually involving multiple people.
Those are all pretty standard, to be honest. Like madlibs for creatures, but with nothing really new.
This, I think, is the key bit of your comment:
> I'm not a fictional guy so I don't really know
HN (and wider social media) is filled with people who aren't familiar with [field] declaring that AI has surpassed humans in [field]; it's not a display of AI's new brilliance but the old[1] display of a speakers' ignorance.
Most of them don't display the same humility as you.
Plus you can prompt it to take those results and make them more unusual, evocative, creative. Basically use the same kind of creative pumps that human artists do.
>> I’d struggle to find any human content that is not recycled from the past.
"A man with a new horse visited the hardware store in Dagwood last week to try and find a new hitching post for his stable."
That simple sentence has enough entropy that I can say, with confidence, that it has never been uttered by anyone in the history of human civilization.
Right, and so do many sentences produced by LLMs. I don't think that's the bar anyone is using for originality.
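The entropy claim can be roughly quantified. A back-of-the-envelope sketch, assuming a ~50,000-word vocabulary and treating each word slot as an independent choice (a deliberate oversimplification, since grammar constrains word order; all the specific numbers here are rough assumptions, not sourced figures):

```python
import math

VOCAB_SIZE = 50_000       # rough English vocabulary (assumption)
SENTENCE_LEN = 24         # words in the example sentence
HUMANS_EVER = 1e11        # ~100 billion people have ever lived (rough)
SENTENCES_PER_LIFE = 1e9  # generous upper bound (assumption)

bits = SENTENCE_LEN * math.log2(VOCAB_SIZE)      # entropy of the sentence
possible = 10 ** (bits * math.log10(2))          # size of the sentence space
ever_uttered = HUMANS_EVER * SENTENCES_PER_LIFE  # everything humanity has said

# Even granting many orders of magnitude of slack for grammatical
# constraints, the sentence space dwarfs all sentences ever spoken.
print(f"{bits:.0f} bits; ~1e{math.log10(possible):.0f} possible "
      f"vs ~1e{math.log10(ever_uttered):.0f} uttered")
```

Under these assumptions the space of 24-word sentences exceeds everything humanity has ever said by dozens of orders of magnitude, which is why a collision is effectively impossible; as the reply notes, this is also true of LLM output, so it sets a very low bar for "originality."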
Our universe operates under deterministic laws, meaning that all events are the result of cause and effect. As such, all new phenomena and entities arise from the preexisting conditions and interactions within the universe. In other words, nothing can come from nothing, and everything that emerges is a product of what already exists.
People have been saying this for millennia.
https://www.biblegateway.com/passage/?search=Ecclesiastes%20...
This is a longstanding question: see The Anxiety of Influence
I can only recommend the fantastic "Everything is a remix" series
AI generates different things, and people choose the good ones. Opinion is the value, just like Protagoras's "man is the measure of all things".
Today's AI doesn't learn, but Future AI will for sure be able to learn from novel experiences
I've mentioned something like this in other comments across HN.
For me, I actually really like that human content is recycled. The discussions around AI, art, and humans always talk about art as a separate product produced by the human. I feel like many pieces of art (not all, for example zombie realism) has a piece of the human inside of it.
An example I used in a comment in this post is Raymond Carver. His short stories are about blue-collar men in the Midwest. Carver was a blue-collar man from the Midwest. I find that interesting, and I liked that he pulled from his life experience to write stories.
I get very excited when I realize that the work I'm engaging with is recycled in some way. Like a song's chorus sharing lines from a separate artist, or a style that seems similar to a different writer I know. I love that. It makes me feel more connected to the creator and makes me like them more because we share similar interests.
An example that comes to mind for me are the manga Hunter x Hunter and Jujutsu Kaisen. The creator of Jujutsu Kaisen loved HxH and it 100% shows. The crazy powers, the complicated fights, the walls of text explaining what happened in a fight, etc. All of those come from HxH and I love that the creator was inspired by it. Sometimes reading Jujutsu Kaisen makes me feel like I'm bonding with the creator over our mutual love of HxH.
With AI created work, I feel like we're missing the human touch. And as I pointed out in another comment here - I don't think it's hard to bring it back. Show me the prompts and the chat log. Why did you choose those prompts? Why did you ask the AI to write in X style? How does that X style make you feel?
On the other hand, maybe the AI prevents some of the human touch from coming through. I imagine much of the recycling that comes from humans is unintentional. The song you wrote on guitar has that riff not because you're copying a band, but because you've listened to so much 90s indie rock that you just make what you love - without realizing it.
Maybe in the future this changes. I'd love to see someone spend some time with an AI and mold it to such a way that what it produces can emulate the user's love and interests. Don't forget to show and share the chat log!
[EDIT]: To add onto this, I've actually done the above. I've played some text adventure games in the past, and I got ChatGPT to play with me. I told it that I had stat points (Health Points, Magic Points), that I had two types of attacks, and that there are three types of monsters. That experience invigorated me to write paragraphs on world building - just so I can feed it into ChatGPT and live in that world.
I've seen some instances similar to this - but people just end up not sharing the prompts they feed into ChatGPT. I'd love to see those prompts. I'd love to see what someone did to get ChatGPT to act a certain way.
“Plagiarizing our past to control our future” -Fran Blanche
A quote I heard recently that really resonated with me:
"Soon, asking a writer if they use AI will be like asking a photographer if they use Photoshop" – e.g., it's a foregone conclusion, and the best artists will generally adapt to using the best tools available
Exactly, it's a tool. The skill floor, as in the minimum amount of skill to produce something passable, has been lowered -- dramatically lowered -- but the skill ceiling is now higher than ever as a result.
It's similar to music. DAWs and samples didn't kill off music; instead, they made it easier than ever for a teenager with a computer and a passing interest in music to create a song and share it with the world. As a consequence, though, the standards for mixing and mastering have gone up massively; people don't really tolerate bedroom recordings with $10 mics any more. I imagine most amateur musicians in the 90s didn't know what a compressor actually did (I certainly didn't).
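For anyone who was in the same boat: a compressor just turns down the parts of a signal that get too loud. The static gain curve of a basic downward compressor can be sketched in a few lines; the threshold and ratio defaults here are arbitrary illustrative values, not taken from any particular unit.

```python
def compress_db(level_db, threshold_db=-18.0, ratio=4.0):
    """Static curve of a basic downward compressor.

    Levels at or below the threshold pass through unchanged;
    anything above it is reduced by the compression ratio.
    """
    if level_db <= threshold_db:
        return level_db
    # e.g. a -6 dB peak with a -18 dB threshold and 4:1 ratio
    # comes out at -18 + 12/4 = -15 dB
    return threshold_db + (level_db - threshold_db) / ratio
```

Real compressors add attack/release smoothing and make-up gain on top of this curve, but the threshold-and-ratio idea above is the core of it.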
Seeing the results of talented artists who are experimenting with AI[0] makes you realize that there's still going to be a massive gulf between skilled artists using SD etc as a tool, versus those who think they can be artists just by putting keywords into an image generation AI and calling it a day.
[0] https://twitter.com/jamm3rd/status/1619896080619159553 https://twitter.com/jamm3rd/status/1633758455952703488/photo... (moderately nsfw I guess)
> the skill ceiling is now higher than ever as a result
the fear from the current professionals is that the new ai raised floor is good enough to be usable commercially, and thus, cut a chunk of economic value from the professionals.
There is no fear that AI will stop people with an intrinsic interest in creation from creating - their ceiling and productivity would skyrocket as a result, in fact. And it would be _these_ productions that wow an audience.
Unlike in the 80s - most programmers no longer need to know assembler/machine code to do their jobs. The skill floor has been lowered - so that programmers like me can get a job.
Yeah, that's a comparison I've been thinking about. C lowered the skill floor, and then Java/Python lowered it even more dramatically, plus the explosion of open source tooling, cloud infrastructure, and general hardware improvements. The average modern junior data scientist could build a software application in a day that would have taken a team of senior C developers months or years to build in 1990. Yet we have not seen a reduction in the number of developers.
I feel like the dev job may change, but as long as people have difficulty describing their requirements so that a programmer can understand them, they'll have difficulty describing their requirements so that an AI can understand them.
It's one thing to ask for a pong game, it's another to ask for a program that "does a kyc process" or something similarly complex
Right, I believe so too. I work in aerospace software, and, as remarkable as the GPT code generation examples have been so far, I think that even if it could write aerospace software, it would be because humans have spelled out the requirements so precisely that we really aren't looking at any substantial reduction in (human) workforce.
You're unlikely to be able to ask an AI tool for "flight management software" and get what you want. This stuff is extremely complex, with lots of moving parts and interactions with other components and with government regulations. You may be able to successfully ask for some specific function, but again, in order to understand that level of detail to even know what to ask for, you have to be pretty involved. It can't just be some non-engineer asking for things they don't understand.
Never mind the challenge of certifying the resulting software for use in commercial airspace...
And while I think AI tools may be a help (not unlike Stack Overflow), I think it would be a really big mistake to start replacing novice junior engineers with AI tools, even if we could do so, because we need senior engineers who really understand this stuff, and most of the training for aerospace work is on the job.
> Unlike in the 80s - most programmers no longer need to know assembler/machine code to do their jobs. The skill floor has been lowered - so that programmers like me can get a job.
Assembly isn't that hard. You'd have still gotten a job, but your employer just wouldn't have gotten as much productivity out of you (so they'd either have had to hire more programmers or write less software).
The same goes for creating art without AI.
Not if creators are paid nothing for their work. I think inescapably, the profession of writing is about to take a big hit from GPT-X, with the few remaining journalists paid considerably less than before. There will be an exodus of talent, leaving... Microsoft and Google to write our news for us?
If you just copy and paste from Reuters then you're not a real journalist. ChatGPT can't travel to a war zone to find out what's going on there. So real journalists aren't going anywhere.
No journalist has a job where they spend all day copy/pasting AP/Reuters. This is a simplistic dismissal of the nature of working in a news organization. It would be akin to me saying that using code from a Stack Overflow response makes one 'not a real developer'.
Agreed, but only because I assume they’ve automated it.
Most small town papers are filled with verbatim Reuters reports.
> Most small town papers are filled with verbatim Reuters reports.
Not exactly. It's mostly AP reports, not Reuters, covering stuff from outside that small paper's area. If there's a flood of national interest in that small town, it's probably one of the reporters from that paper that will write the national wire story.
That's kind of the point of the Associated Press, lots of small local organizations banding together to offer comprehensive local/national/international coverage.
https://en.wikipedia.org/wiki/Associated_Press:
> Many newspapers and broadcasters outside the United States are AP subscribers, paying a fee to use AP material without being contributing members of the cooperative. As part of their cooperative agreement with the AP, most member news organizations grant automatic permission for the AP to distribute their local news reports. The AP traditionally employed the "inverted pyramid" formula for writing, a method that enables news outlets to edit a story to fit its available publication area without losing the story's essentials, although in 2007, then-AP President Tom Curley called the practice "dead".[6]
> So real journalists aren't going anywhere.
and there aren't very many of these kinds of journalists (let's call them "correspondents" to differentiate).
The journalists copy/pasting from Reuters, or the ones who paraphrase a press release, are going to go the way of automation. I suspect the number of such jobs is still quite large today, but they will disappear relatively soon.
> minimum amount of skill to produce something passable
Look at the history of woodworking for inspiration.
It used to take human sweat to make flat, straight surfaces. Something like a Walmart flatpack bookshelf would have been VERY expensive to make 1000 years ago. Industrial tech made that cheap and ... now it's seen as tacky, out of style, and only for poors.
The cultural result this century is an avoidance of "plastic wood" finish, faux finishes and faux surface prep that look old and worn and hand-scuffed, and a strange hipster fetish for "live edge" raw bark on finished product. Sure, in 1875 one might have predicted, based on mass production trends, that furniture in the 2020s would be all right angles, smooth as a glass mirror, photographic "ideal" woodgrain instead of real woodgrain, with a plastic-y lifetime thick film finish. But that's only for poor people who shop at Walmart; nobody with money buys that stuff.
I suspect what AI can cheaply produce will rapidly come to be considered lower class and trashy. Oh, sure, completely AI-generated movies look very nice and sound very nice and are ever so trendy and well written, but special effects and yesterday's memes on today's big screen, generated entirely by computer algorithm, are only for poors. Now, us rich cognitive elites only attend live theater plays. Aspirational middle-class types watch recordings of theater plays (admittedly maybe some AI-generated). You wouldn't show off your 'wealth' and 'class' by taking a date to an AI-generated movie - that would be insulting; you might take her to a live theater play... I'm sure there will be some AI incursion - maybe the lights will be run by AI, or there'll be AI-painted backdrops, or some actors will lipsync their lines to AI, or roughly every 5 years we'll have to tediously suffer through a new generation of rewritten, modernized classics, and it'll be worse in the future with AI - but in general, human will be the status symbol, because it'll be expensive.
I suspect being able to ask an AI to generate non-copyrighted commercial quality music will mean massive noise pollution (much like every room required 15 digital clocks in the 80s and every room required 15 super bright blue power LEDs in the 00s) followed by the only people making money in music will be live humans playing physical instruments.
Poor people try to get by and don't spend money and when they do it'll be on infinitely cheap AI commodities. Rich people spend money on unique stuff, not mass produced AI shovelware. Aspirational people spend money trying to look rich so they will spend all their money on non-AI stuff.
It is a tool only when used as a tool, in the sense you're talking about. For example, I can write a sentence and, being a non-native English speaker, I can tell there's something wrong with it, but I just can't put my finger on it.
So I ask the AI to rephrase it for me. Now I can pick and choose an option from those the AI offered, or I can just keep my version if it turns out it's good enough. I feel this is an acceptable use, and you can still maintain authorship, just like book authors maintain authorship after an editor has overhauled their writing.
Your DAW example would work if DAWs could generate samples, beats, and whole songs by the click of a button. But are you truly the author, then?
> Your DAW example would work if DAWs could generate samples, beats, and whole songs by the click of a button. But are you truly the author, then?
They absolutely can, at least with 15-20 clicks rather than 1 at the moment. Drag in drum loop, drag in melodic loop, use Scaler2/Cthulhu/Captain Chords to come up with a chord progression, use a synthesizer preset to play the chords, drag in vocal sample, put on Ozone AI Mastering on the master bus, done. If you sent me back in time to 1990 with that track, people would be really impressed.
As to whether you can claim you're the author, this gets debated a lot. I'll just paste the famous goat farming quote.
> I thought using loops was cheating, so I programmed my own using samples. I then thought using samples was cheating, so I recorded real drums. I then thought that programming it was cheating, so I learned to play drums for real. I then thought using bought drums was cheating, so I learned to make my own. I then thought using premade skins was cheating, so I killed a goat and skinned it. I then thought that that was cheating too, so I grew my own goat from a baby goat. I also think that is cheating, but I’m not sure where to go from here. I haven’t made any music lately, what with the goat farming and all.
There's a natural stopping point from a musical perspective, and that's learning to play the drums. Every step beyond that only affects image or cost. There's a huge variety of commercially available drums, so anything you build will likely sound like something you could buy. Even if you make something weird, it will sound like a modified commercial drum, and there are commercial products for drum modification too.
I really like the goat farming quote, as we all have to start somewhere. Still, the takeaway isn't that you can find a song on the internet and claim it to be your own creation.
There's a GIGANTIC difference between a hammer and a button that when clicked builds a cathedral.
And there remains a GIGANTIC difference between what AI does today and its version of a cathedral-button.
It’s a bit more like a hand crank that builds a lopsided toolshed at this point. Which is absolutely incredible, to be honest, and couldn’t have been anticipated by many people five years ago, but doesn’t necessarily entail that the cathedral-button will arrive any time soon.
We kinda have to be patient.
There's also a gigantic difference between spending 10+ years learning to play the drums and recording my playing at a recording studio, and dragging in a drum loop from Splice. Listeners don't seem to care either way.
Look into the history of drum machines. When they arrived in the late 70s, they sounded terribly mechanical. In the next couple of decades, small amounts of variation were added to make them sound more natural, the way many photographers today are adding artificial artifacts to photos that have been cleaned up too much by cameras' semi-automated post-processing algorithms.
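The "humanization" trick added to those later drum machines is simple in principle: nudge each note's timing and loudness by a small random amount so a rigidly quantized pattern sounds less mechanical. A minimal sketch of the idea follows; the function name, jitter magnitudes, and note representation are my own illustrative assumptions, not from any particular machine or DAW.

```python
import random

def humanize(notes, time_jitter_ms=8.0, vel_jitter=6, seed=42):
    """Jitter (time_ms, velocity) note pairs by small random offsets.

    Timings shift by up to +/- time_jitter_ms milliseconds; MIDI-style
    velocities shift by up to +/- vel_jitter, clamped to the 1-127 range.
    """
    rng = random.Random(seed)  # fixed seed so the result is reproducible
    out = []
    for t, vel in notes:
        t2 = t + rng.uniform(-time_jitter_ms, time_jitter_ms)
        v2 = max(1, min(127, vel + rng.randint(-vel_jitter, vel_jitter)))
        out.append((t2, v2))
    return out

# A perfectly quantized four-on-the-floor kick at 120 BPM (500 ms per beat)
pattern = [(0.0, 100), (500.0, 100), (1000.0, 100), (1500.0, 100)]
print(humanize(pattern))  # each hit lands slightly off-grid, slightly varied
```

Modern "humanize" functions in sequencers are more sophisticated (groove templates, per-instrument swing), but random jitter of timing and velocity is the basic mechanism.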
But how sterile will art become? Already most people have become inured to mass synthesis, where pureed, sterilized, Disney-fied content is fast becoming the unavoidable norm in so much of our media (e.g. pop and country music). Today's AI yea-sayers, receptive to a synthetically generated window onto the world, are looking only at the first step in what will surely be a progression -- from a highly variable world of human-wrought artifacts to a homogeneous, digitally simplified model of the world, one that will evolve into something ever more 'palatable', until... what? Is that a trend you would welcome?
Not I. Especially with Microsoft and Google and other gigacorporations calling all the shots in the absence of informed enforced guardrails to constrain how this rapidly evolves.
I agree with the tool analogy. However, there's something different between using a tool and outsourcing the entire content production by only specifying the end result.
Whenever a customer asks me to develop an app, I don't consider that they did the job. I did it.
> As a consequence, though, the standards for mixing and mastering have gone up massively; people don't really tolerate bedroom recordings with $10 mics any more.
I don’t find this holds true for hip hop, the most streamed genre, whatsoever. There are countless examples of people making it big recording on a cheap mic in their mom’s bedroom. One example is Chief Keef. A much more prominent example is Kanye, who had terrible mixing on The Life of Pablo to the point where he released patches[0] for the album. He also recorded parts of Jesus is King on an iPhone[1]. To further the point, bootleg recordings and demo tapes of unreleased songs in artists’ back catalogs continue to be popular with devoted fans. Average people like good music, even if the quality of the mixing and mastering is not stellar.
Even The Beatles released songs with minimal takes from a live rooftop performance on Let It Be; hardly an ideal recording set up. Although they surely had better mixing.
[0] https://archive.org/details/2016-the-life-of-pablo-updates
[1] https://genius.com/a/kanye-west-says-20-percent-of-jesus-is-...
Love the connection to home studios. Everybody was able to make more music much more easily - but it also inflated the music industry with a lot of mediocrity. So some will find ways to use AI for greatness, but most won’t.
The skill ceiling isn't higher. It's just in a different building next door to the current one. The same thing happened with the film to digital transition. The skill range didn't change, the set of skills itself changed.
The ceiling refers to what you can produce at the top end, not the specific skill used to achieve it.
> people don't really tolerate bedroom recordings with $10 mics any more
It's called bedroom punk now.
The problem with analogies is that you can pick one to say whatever you want. The analogy isn't necessarily true just because it feels right.
Asking a photographer if they use Photoshop creates a framing where the artist still went out and took a picture to create the work, and Photoshop only modifies their original creation.
But you could just as easily say "Soon, asking a writer if they use AI will be like asking a photographer if they have a camera". That sounds ludicrous, but that is exactly what generative AI offers: the ability to create content essentially ex nihilo.
No, using AI to write is basically having it do everything for you. It's not a tool; it's a replacement. In the years to come, several hundred thousand jobs will begin to disappear, from copywriters to fiction and non-fiction writers, replaced by roles ranging from marketers to historians. It's not Photoshop for photographers; it's just going to push the vast majority of people wanting to pursue these fields out of the market.
I think nearly everything you said is likely wrong.
AI doesn’t “do everything for you” any more than word processors, music sequencers, or CAD does. AI allows people to operate at higher levels of abstraction, is all. Yes, fewer people will type repetitive copy in to word processors. But that’s fine.
You are mistaking activity for productivity. Few people go into writing because they enjoy typing. AI offers leverage, which is the most amazing thing for anyone interested in producing work.
Ask any professional writer: revisions are a chore. Authors often say a novel is 10% writing, 90% rewriting, and they don’t enjoy the rewriting part.
So, IMO, contrary to your doomer take, I think we will see more people producing better work because they can focus on the parts they love and not do the grind that was only ever a necessary evil.
(The exception being those who love the grind and not the product)
I dunno, I have a friend who has effectively automated her entire job at this point through ChatGPT, and her role would almost certainly disappear if her bosses were aware.
Netflix also infamously used AI to generate background images for an anime, which is work that previously would have put food on the table for illustrators.[0]
It's hard to look at all of this and think that all the jobs are going to stick around. I know at least 5 copywriters who have made their entire living as adults writing banal junk for small companies. All 5 of those careers will vanish in the coming years.
[0] https://www.cined.com/netflix-uses-ai-to-generate-anime-shor...
>Netflix also infamously used AI to generate background images for an anime, which is work that previously would have put food on the table for illustrators.[0]
Sooner or later studios will use it to leverage the work of their artists. If the same quality of animation requires half the staff to make it, some will use it as a cost reduction; others will use it to make better and more complex art. Maybe we'll finally get 60 fps anime instead of the 10-24 fps that is typical now. Your favorite series that was cancelled after a single season because they didn't have the budget - now they can make 5 seasons of it with the same budget.
I work as a professional writer. Editing is a part of the process, but for a large portion of work, editing does not exist outside the publishing sphere. Publishers will be unable to handle the sheer load of submissions given to them by "writers". More established ones such as Penguin Random House have already determined that with increased competition they will likely see sales drop; smaller workshops are predicted to fold due to the scale of operations in the years to come. These problems already existed before AI or ChatGPT, but the scale will make them much worse.
The end has come.
Using it to write vs. using it to help you are two very different things.
For example: sometimes I will pass a paragraph through an LLM and ask it to return it in a more "active voice". Then I _manually_ edit my original paragraph, if I wish, to make it seem more suitable for my current goals.
This is sort of like Photoshop for photographers.
On another note, online submission queues at publishers will be flooded with spam due to the ease of generating and submitting content. It already happened with Clarkesworld. The end is coming, not a revolution. Just the end.
> "Soon, asking a writer if they use AI will be like asking a photographer if they use Photoshop" – e.g., it's a foregone conclusion, and the best artists will generally adapt to using the best tools available
The problem is that it automates the best parts and transforms the user into a curator/manager.
Some people want that, but other enjoy the creative part more.
To bring it back to photography, that's the problem with digital: you blast through hundreds of photos and spend most of your time selecting and editing at the computer.
With analog, and especially wet plate or direct positive paper, it takes a while to compose your picture and you only have one chance to take it and develop it. It's very easy to mess anything up, which is what makes the process more meaningful. And you're left with a unique copy, not an artificially limited single copy.
Yep, exactly - I think people generally regard artistic work as getting to a successful end result. When actually the process of creating something, and potentially wrecking something at any moment, is usually the most fun / exciting and fulfilling part. That magical process is lost with AI generation.
The "writers" who can benefit from AI are not artists. They might be bloggers or journalists or researchers. But there is no way that a generative AI can produce, say, your memoir for you. How could it? You have to say what happened. Your life experience can't be generated. Ditto for any sort of novel above the level of the most basic imitative genre stuff. I'm certain AI could spit out a perfectly passable airport paperback romance, for instance—but how could it possibly write Infinite Jest?
The key isn't that the "AI" writes the whole darn thing. It's a "get-out-of-writers-block-free" pass. I've successfully used chatGPT for exactly this process. I'm writing something — unique, by me! — and I want to link a few thoughts together in a succinct way. I outline what I want and ask the AI to generate some plausible prose.
I almost never use the prose word-for-word, but goodness is it helpful.
I’m not so sure an AI couldn’t, in theory at least, produce a memoir if you fed it your diaries, emails, social media chats and histories, photos and videos from throughout your life, and so on.
This feels like a good point. I am curious how the "feeding" would occur here. If there's an API to do this sort of thing right now I'm not aware of it. But I definitely believe that it's coming!
The recently announced GPT-4 [1] now supports feeding it with more “context” when writing a prompt than previous models.
It supports a context length of 32k tokens (approximately 25,000 words or about half a novel).
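The "32k tokens ≈ 25,000 words" figure follows from OpenAI's rough rule of thumb that one token corresponds to about 0.75 English words. A quick back-of-envelope check (the ratio is an approximation for English prose, not an exact property of the tokenizer):

```python
WORDS_PER_TOKEN = 0.75  # OpenAI's rough rule of thumb for English text

def tokens_to_words(tokens: int) -> int:
    """Estimate how many English words fit in a given token budget."""
    return round(tokens * WORDS_PER_TOKEN)

# GPT-4's 32k context is 32,768 tokens
print(tokens_to_words(32_768))  # → 24576, i.e. roughly 25,000 words
```

The real word count depends on the text (code and unusual words tokenize less efficiently), so treat this strictly as an order-of-magnitude estimate.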
But then the AI is the writer and you're just an editor or curator. So perhaps don't call yourself a writer if the AI produced a significant amount of the content. Same goes for Stable Diffusion/Midjourney prompters who shouldn't really call themselves artists. (Things would of course change if they do significant postprocessing of the output, though I'd still think this is much easier than coming up with something novel on the spot.)
Of course, it's not my place to say who should or shouldn't call themselves a writer, but I'll simply personally respect someone more if their stories aren't ghostwritten by the AI.
Are Pollock or Malevich real artists? Their work doesn't require great technical skill.
I feel a better comparison is like asking a painter if they use a camera. At some point we just say “you’re not a painter anymore, you’re a photographer.”
Feeling pretty stupid for all the time I invested in pinhole photography and darkrooms, 20ish years after Photoshop was released, about now. Guess the best artists all have $5000 to spare for a MBP and monthly Creative Cloud subscriptions too…
In a test of verbal reasoning, photographer is to Photoshop as writer is to ...?
I don't think "AI" is the way most people would complete that sentence. Perhaps "Word"? Or "LibreOffice" if "Photoshop" were replaced with "Gimp"? Personally I use Emacs or Vim. You could incorporate AI into any of those tools, but how exactly? And would it be a core part of the functionality? And would it be something that the best artists make much use of?
If you used AI to make the spelling/grammar checker more reliable, probably most people would use that (assuming no privacy problems), but if there's an AI-driven autocomplete for sentences probably a lot of people, including the best artists, would turn it off, just like I turn off the autocorrect on a word processor today.
If an AI can complete the sentence, then perhaps the sentence isn't worth completing, at least if we assume that the reader is as clever as the writer and has similarly sophisticated tools. Perhaps the AI-driven tool should instead put some kind of wavy line under the second half of the sentence to indicate that it's boring and obvious and doesn't need to be there.
Leaving out “soon”, there’s truth to that, but the immediacy really depends on how this generation of AI develops and where it encounters its limits. It’s not there yet and we actually don’t know how far these current advancements will take us.
It’s tremendously useful technology for many domains already and you can see the stepwise refinements that will permeate many parts of our lives. Big money corporations are productizing those elements already.
But to be as fair to the pessimists as the optimists, it's actually still a very big leap from Midjourney and GPT-4 to something that becomes the next camera or typewriter in terms of ubiquity. Because we saw a huge leap recently, we feel close and excited - and we might prove to have been right in hindsight - but we also might find that there are some hard conceptual limits that no one will break through for another fifty or hundred years.
I don't think it's like any tool that has come before in that it can take much of the decision-making process away from the artist - it is much more like hiring someone to paint a picture than using Photoshop.
For image-making, there is a decision in the prompt and in selecting images, but that is very different from making a decision about each color and brushstroke, and working to finish a painting. It's orders of magnitude more difficult, which is why great masters are celebrated. Creating stuff with AI will suit certain people; I definitely don't think the 'best' will automatically use it. It does seem to take away a lot of the fun of making images, and truly original work will always push quite far outside the training set.
Exactly, but the implication is different for me. I dislike all the filters and think blurring oneself in photos is the weirdest thing ever. People who heavily use blurring to mask their wrinkles produce - in my opinion - photos that look like grimaces. I find both the pictures and the people using filters somewhat scary, because what happens if you meet them face to face? Use something like Google Glass and real-time filtering?
For literature, it could be interesting to see how authors use new tools, where and in what sense. Maybe have more conclusive plots? Fewer inconsistencies? Have AI imitate dead poets' styles?
We will value the classics more I guess, since they were done the old fashioned way.
There's shades to art of course. Not all photography is airbrushed digital stuff, and the mild stuff most people probably do, like changing the exposure curves around in photoshop or lightroom is basically akin to what photographers were doing when exposing prints from their negatives for the past 150 years, only with less time and no chemicals. Likewise, some people might be happy with AI generated fodder to read, others might want other things.
I don't think an AI can do traditional art anytime soon :) robots don't have the dexterity of human hands (yet), at least not at the cost of human artists
In a novel-writing application - you write the first paragraph and the AI can finish the chapter.
There's more oil painters now than at any time in human history.
New tools open doors, not shut them.
Cortes vs the Aztecs
Aside: "1491" is an informative book that might dispense with that particular metaphor.
Art will continue to evolve with AI.
Another argument I keep hearing (most recently from pg), is that we'll always need non-AI training data. That, too, doesn't follow. Training new models on synthetic data does not mean we get stuck in a particular mode or style. We'll continue moving, improving, and trying wildly new things. Bootstrapping with synthetic data doesn't block evolution - it enables faster evolution, even. (I'm using synthetic data to train new models to great effect.)
People are angry that we've lowered the opportunity cost barriers and so they're expressing their frustration.
It's a good thing that life's choices no longer fence us in as much. Everyone should get a chance to learn how to express themselves through art with the new regime of tools. Despite changing economics, there will still be a top 1% that do better than the rest of us.
People aren't angry/worried because they don't have a competitive advantage any more--people are angry/worried because they sense (I think correctly!) that AI will eliminate the part of their work that they find enjoyable.
Artists, by and large, don't do art because they enjoy having art--they do art because they like /the process/ of producing art. If that process can be done faster and better by AI, then yeah, sure, they /might/ be able to still do art for a living (some artists will be able to leverage their experience to maintain an advantage; other, less flexible ones will lose work)--but the work they do will likely not be commensurate to the work they were doing before, and will likely be less enjoyable to them.
The thing that worries people about AI is that it'll make all creatives into middle-managers.
To me, creative fine woodworking is making a cool idea based on the world around me and experiences I've had that balances engineering needs with artistic form.
Then I spend 95% of my hobby time tediously sanding boards and applying finish and this is NOT the fun part of my hobby.
I will say the only time I've gotten compensation directly has been for the latter, because nobody likes doing that stuff.
I'd say AI will make creatives into micromanaging middle managers. The type that gets way too involved in critiquing your every decision is still going to micromanage; they'll just be harassing an AI instead of humans.
Fundamentally the creative act of making book cover art has always been specification. You can say to an artist "make me a book cover looking like ..." or you can hire a guy and give him no direction and trust him to decide what to do. But the creative act was the specifying, not the mechanics of turning that idea into printing ready art.
Another problem you run into is that AI, as currently produced, is a groupthink enforcement technology, and most people will be turned off by that. The blandest, least offensive corporate dining experience is likely McDonalds. How much do hipster foodies prefer going to McDonalds over ... anywhere else on the planet? AI can only create the ideal Big Mac, and you can make money shipping Big Macs, but only selling to poor people; everyone else wants something else, even if in a strict corporate inhuman sense it's inferior in every way to a Big Mac.
I think the same. Many artists hate it because they feel threatened. I think the opposite: creating art (e.g. drawing) was only for the ones who had drawing skills. Now it's being democratized, and everyone, regardless of being talented at drawing or not, can freely express their feelings and creativity.
I think it's a great thing.
In the US, your livelihood essentially justifies your existence. Disdainfully reducing someone to being a 'hater' when they react to that being threatened is exactly the reason many, if not most people find SV culture disgusting. They're not wrong.
Beyond that, if you think AI algorithms will replace art, you don't understand art. It replaces much of the commercial utility of many artists and creating many types of images, but the idea that it threatens art, generally, is preposterous. People who think art school is only about learning to physically make art are like people who think computer science degrees are only about coding. Most of what you learn in art school is conceptual thinking, communication, really deeply seeing things in a way that most people don't, and stuff like that.
Factories didn't replace potters when they started making pots and dishes or sculptors when they started making cheap home decorations-- it just meant the artisans no longer made money creating a commodity because most people were satisfied buying chintzy shit cheap. Many potters who lived through that change probably died in poverty because of it. Talk big numbers about the way industry shifts all you want, but not everybody can switch to an entirely different category of employment just because someone taught a computer to do it.
> I think the opposite: creating art (e.g. Drawing) was only for the ones who had drawing skills. Now, it's being democratized and everyone, regardless of being talented in drawing or not, can freely express their feelings and creativity.
There was never anything stopping anyone from picking up a pencil and learning to draw. The idea that there's some inborn "talent" for art is absurd. There are only people who practice and people who don't.
What you are actually "democratizing" is the ability to produce finished products without ever putting in any kind of work.
Which means you are absolutely devaluing the people who actually did put in the work.
It is a terrible thing. A miserable race to the bottom of human expression.
People used to have to learn HTML and about web hosting to be able to publish text on the internet for others to read. There was never anything stopping people from putting in the effort to do this. All that website builders, social media sites, and forums like HN do is allow people to accomplish this without putting in any kind of work. It's a terrible thing that's caused a race to the bottom in the web, reducing the variety and quality level of most people's browsing experiences.
I feel like you are trying to make an analogy and show the parents position as absurd but I think what you have said here is probably literally true. That said, there is a big difference from tools that reduce friction to do a task to those that eliminate the need to do the task.
I think you're probably trying to use this as a reframing of my logic to show that I am being absurd
Jokes on you, I agree with everything you just said, fully unironically
The web is way worse where people have to put zero effort into uploading their content
The best places on the web are the places where people have crafted the page they want.
The problem you've identified is that using a skill is available only to those who have bothered to hone the skill. Why should we have to bother learning something? It's simply not fair. If I want to be an artist I shouldn't have to practise or educate myself. The simple desire to be an artist should be enough!
Note: I don't really care if people use AI to generate anything but they should be forthright about doing so.
> If I want to be an artist I shouldn't have to practise or educate myself. The simple desire to be an artist should be enough!
IMHO this is absolutely the case, and has always been the case. Being paid as an artist, on the other hand, is something entirely different, as is having one's art recognized as good by other people. But I am sure everyone remembers other more popular artists telling them they're "doing it wrong" in art class, on forums, or even all the way back to preschool coloring books if someone dared to color outside the lines.
There have always been those who seek to gatekeep what art is and who they view as legitimate participants in it. I suppose everyone does in some sense. But I am very excited by the prospect of art as a personal developmental journey becoming more open to everyone.
It’s not fair? It’s not fair that you should need to learn something? This is really a sad state. HN seems to be becoming a pretty sad group.
It was said mockingly.
Hard disagree. Most people can draw if they put in the time but they don't. Artists did and bet their careers on it but now their effort is being trivialized by people who want a bot to "draw the owl".
This is all explained well by "Drawing on the Right Side of the Brain". People apply their childhood drawing symbols to realistic art in their teenage years and get frustrated. They never learn to see with an artistic eye and quit.
I used to think I was not "talented" at drawing until a girl I was dating gave me that book to read.
I was skeptical at first but after going through a couple of the drawing exercises I was convinced. They are so cleverly constructed to make you step outside of your normal brain and just draw, it's kind of amazing the results.
I'm never going to be an artist professionally but I do actually draw as a pastime now and I have improved substantially.
I think if I had practiced more as a kid and teen I may have pursued art professionally.
Part of the problem is that adults see a kid upset that their drawing isn't very good and say things like "That's ok, you're just not an artist. Not everyone can draw" and people internalize that. It's supposed to make the kid feel better and it might cheer them up in the moment, but it also convinces them that they cannot learn to be better at things they are not immediately good at.
Toxic Feel-Good shit.
Art is already the most democratically available act. Anyone can learn the skills involved. It's not a select few.
It’s about expressing and finding yourself and developing a soul through creative action, not just squirting out a picture that looks “correct”.
You only shortchange your own soul’s development by trying to skip the hard steps that it takes to develop a skill like that.
Just because you can make an image on a computer that looks like it was drawn does not mean you can draw. Hell, I’m not even sure it means you can express yourself. The inputs you contribute when using an AI prompt are so minimal compared to the informational input required to create something in the physical space.
Don’t let anyone convince you that it’s an exclusive act. Find a pencil and get at it.
I disagree on the "entering a prompt" part. By a lot. I've been managing to get myself lost (and somewhat unlost) in latent space. Let me tell you, a latent space can be a big place all by itself, even if you stick to 1 seed.
Ultimately though, I realize that trying to get everything you want from a single prompt won't work. Tokens have implications, and they influence each other too much. Long term, one needs more than just single prompts, and the tools are still evolving to support that.
Still, you'd be surprised at how much you can express even now.
I don't think I would be surprised. You're just removing your own inherent and vast set of input and modelling parameters from the equation and offloading it to a more rigid model.
You're already the most complex model and model-maker—the procedure of developing a skill through a creative feedback loop and its output can be vastly more edifying for yourself and others. Don't take my word for it. You can only know by doing it.
These models have their place as useful tools, but they are in no way a replacement for the experience of creating and all its challenges. You can't climb a mountain by paying someone else to do it for you. That analogy raises the question: what is the true value in climbing a mountain? What is the value in creating?
* If you put a few splashes of paint on a canvas you get something that looks pretty interesting for a minute, but it's not the picture in your head, and after a while all splashes start to look the same.
* If you grab some pencils and scribble all over a page, you'll get something that looks interesting for a minute, but it's not the picture in your head, and after a while all scribbles start to look the same.
* If you throw some geometric shapes into blender and run a render, you'll get something that looks interesting for -ok maybe a couple of minutes-, but it's not the picture in your head, and after a while all geometric shapes start to look the same.
* If you throw a few tokens into a prompt and hit generate, you'll get something that looks interesting for -now maybe a couple of hours?- but it's still not the picture in your head, and after a while all generated images start to look the same.
Each time, if you want to actually get anywhere close to the picture in your head, you're going to need to practice, and you're going to have to learn the medium, and you'll have a lot of experiences along the way. "There's no mistakes, just happy little trees", right? But the actual path -and the happy little trees you experience along the way- will be rather different for canvas, pencil, blender, or stable diffusion.
Have you ever tried to hold a 7-dimensional, non-linear, chaotic landscape in your head whilst trying to resolve nightmare images of deformed limbs and eldritch landscapes?
I don't think that wanting to accomplish art means you have to enjoy the process of these specific modes of art, or that you're failing yourself if you don't. And I think what a lot of AI artists want is the results, and a big portion of that problem has been solved.
And saying that, I don't disagree with the idea that learning how to draw causes a larger mental development than writing prompts, and that prompters will forever miss out on that human development. It's just that not all people are motivated enough to "climb the mountain" of being a decent enough illustrator, whether hobbyist or professional. Some of them are happier with the process of getting decent results quickly than stumbling over themselves for years just to reach the point of what AI can accomplish now. Maybe the process of drawing is painful because of the way they treat the process, discouraging them from persisting, but it's hard to change people, and they're capable of defining failure on their own terms. And those people will always exist. Not everyone who wants to draw will end up enjoying it or succeeding after their preconceived notions are shattered. That in my mind means there will always be a target audience for generative art tools.
I think AI art could reveal a lot of diminished self-confidence towards life in general, and I could imagine that some portion of the people flocking to the generative programs would have become manual artists if they believed in themselves more, but they don't, and AI has won them over because they get to skip steps to have the result. Maybe they didn't want to go through the challenges of self-development to gain a new skill, but imagined artistry as nothing more than a fun hobby, and AI art aligns better with that notion (mistaken or not) than doing months and months of practice, in a way that no technology has been capable of up until this point.
But does that mean those people have failed as artists and soul-searchers? I think that's still up for debate. AI could quickly become the new kitsch (if it isn't already), but they're happy creating it, and many are happy consuming it. I don't define myself as an artist and have no insights as to the "value" of AI art as opposed to human art, since I'm a programmer who's stumbled into this other world and I feel that's a domain I shouldn't be making solid declarations about. But I believe both types of art have their place. Maybe a collaborative project between artists of both denominations would be useful in showing the unique qualities of both styles of creation.
My question reworded was actually: what is the result you expect?
A 'correct' image on a computer? Then you'll get just that and little more.
But I'm suggesting there is much more to be gained by taking part in a creative act as an experience and its myriad of facets.
Taking the stance that a language or 2-dimensional geometric model is a replacement for the latter is gross hubris. At best it can describe a potential outcome, but it doesn't provide any of the rest.
To the point about "accomplishing art", there is an old saying often repeated: "art is never finished, it is only abandoned". So if you think art is just something to be accomplished, a box ticked and then profited from then I think it is worth studying the subject further.
I think what most on HN would say is that you are being ableist and gatekeeping by saying those who pay others to climb a mountain for them are not mountain climbers. I guess it’s a sad state.
Nobody said that. We're having a pretty reasonable discussion.
> I think the opposite: creating art (e.g. Drawing) was only for the ones who had drawing skills. Now, it's being democratized and everyone
Drawing was for anybody who put in the work.
You may as well say football wasn't democratized until the release of FIFA International Soccer in '94.
I think people on HN would say a video game like that is ableist for those that take the time to learn how to play it. What they want is an AI to play it for them.
Training ESRGAN, I quickly learned that AI sniffs artifacts humans can't see. And I got particularly funky results training a model on selected previous generation upscales.
Hence I disagree. Even if the training data curators are perfect, there is a risk of "model inbreeding."
And the curators are not perfect. In fact, they seem to prioritize (by necessity) volume and speed over quality and curation when training these LLMs and diffusion models.
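The "model inbreeding" risk is easy to demonstrate with a toy simulation (a minimal sketch, not ESRGAN or any real diffusion model): repeatedly fit a Gaussian to samples drawn only from the previous generation's fit, and the estimated spread collapses over successive generations.

```python
import random
import statistics

random.seed(42)

def degenerate(generations: int, n_samples: int) -> list[float]:
    """Track the fitted std as each 'model' is trained only on
    samples generated by the previous model."""
    mu, sigma = 0.0, 1.0  # generation 0: the "real" data distribution
    history = [sigma]
    for _ in range(generations):
        # draw synthetic data from the current model
        samples = [random.gauss(mu, sigma) for _ in range(n_samples)]
        # refit on the synthetic data alone
        mu = statistics.fmean(samples)
        sigma = statistics.pstdev(samples, mu)  # estimator biased low
        history.append(sigma)
    return history

history = degenerate(generations=2000, n_samples=50)
print(f"std: gen 0 = {history[0]:.3f}, gen 2000 = {history[-1]:.3g}")
```

Real models trained on partly synthetic corpora won't collapse this fast, and curation or fresh-data injection slows the drift, but the direction is the same: each generation loses a little of the tail information it never saw.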
I’m not sure what the last line says. Doesn’t the top 99% do better than the rest? I think you are making the common HN argument that it is “ableist” to think doing art should require any kind of skill or something. I don’t really agree that the AI is so good, but let’s say it is: if that’s the case, how do you think anyone will afford a computer to use the AI, or even a house in which to type or say all these magic prompts?
I get the concerns with AI, but this kind of thing has always looked odd to me.
Like imagine a label on a thing that says "Made without CNC machinery". So instead of finishing the thing on a CNC mill, they instead stuck it on a Bridgeport manual mill, and finished by hand. Or somebody finished the cast with a file.
Okay, and so? In the end, a hammer is a hammer. If it hammers well, what difference does it make how it got that precise shape? It's not like some inherent goodness is being imparted by a hand file.
Now I get some methods have flaws to them, and some form of associated harm. The problem I see with blaming specific tools is that it's simplifying the problem too much. Eg, if the problem is taking jobs, then picking up one particular tool to blame for that allows manufacturers to use a different tool and cause the same sort of issue. If the problem is say, pollution, then it's not at all a given that the replacement method will be ecologically friendly.
IMO the better thing to do is to target the underlying problem. If, say, your issue is ecology, then you want to certify that the manufacture is as environmentally friendly as it can be, not that the thing isn't made from plastic; otherwise there's a chance the non-plastic thing will be even worse for the environment.
I can see a difference for creative works. I'd be more interested to know that I'm looking at something an artist did intentionally than something sampled from a manifold of similar works. Just like I'd rather (all things being equal) have a real drawing or painting than a print. You can make analogies in other art forms too.
And if someone is going to send me a sales pitch or email that GPT wrote, I'd rather they don't insult me and just send me the prompt.
I think TFA is a gimmick, but I see value in knowing whether I'm relating to a human brain or to a statistical model.
External help is not at all unusual in artwork. Eg, many artists just look for reference pictures online when they want to depict a tricky pose. Apparently the infamous Captain America picture happened because of a bad usage of a reference.
Or manga artists use things like screentones: https://www.youtube.com/watch?v=Q2U4EfKCfjI
I think AI will go down that road. Yeah, lots of people will mess around with prompts, but artists able to retouch the output, and to mix and match stuff will get much better results.
You could argue that the hammer from the previous example is a creative work.
A hammer, a pair of shoes, a leather sofa, etc. They all have an element of craftsmanship and it seems like people who do that work usually take pride in it from an artistic and quality standpoint.
> I'd rather they don't insult me and just send me the prompt.
and the prompt makes no sense to you, because you are not able to interpret it the way you would interpret the generated output of said prompt. The information isn't in the prompt, and never was.
I think the correct response here would be to reply to AI sales emails with twice as much AI-generated, leading-on text.
> In the end, a hammer is a hammer. If it hammers well, what difference does it make how it got that precise shape?
"Hand made" has been a selling point since things were first made by things other than hands. Example: "The factory, known as the Ateller, [...] it is the place where our craftspeople assemble each BUGATTI by hand."
That's because "handmade" was a proxy for "high quality", in situations where machines lack the skill to meet tolerances and fine details and customization that humans can.
I never got the appeal of that honestly, least of all for something like a BUGATTI. It's not like an artisan hammered the panels into shape by hand, or somebody manually bored those cylinders.
In fact it's probably something completely incidental to the entire thing -- they're not assembled by hand because it's better, but because not enough of the cars are made to make automation make sense. It's just a pragmatic business decision, which then some marketer figured out how to spin.
> It's just a pragmatic business decision, which then some marketer figured out how to spin.
Possibly but not necessarily. Automation limits what you can do as you are restricted to what the machines can do. Hand-made components do not have the same limits. Additionally, materials are often not perfect or uniform and a machine will work with different tolerances than a craftsman. Lastly, in certain cases the idea is the craftsman can discern the better materials and choose accordingly.
So for instance when making acoustic instruments, being "hand-made" means that the craftsman could tap-test the wood and use discretion on which pieces to use for each instrument. They would have examined the wood for both structural and form considerations. There would be special attention to assembly and anything custom can be done. Independent decisions would be made on a case by case basis instead of using a general rule.
Handmade also implies superior materials, because when automation is used, the inputs are often the most expensive part. With a handmade good, the labor generally far surpasses the cost of materials, so using premium materials is of little consequence to the final price.
But yes, marketing can be a big part of it in many cases.
> Like imagine a label on a thing that says "Made without CNC machinery". So instead of finishing the thing on a CNC mill, they instead stuck it on a Bridgeport manual mill, and finished by hand. Or somebody finished the cast with a file.
This is basically the selling point of the entire luxury watches industry.
I don't think there's a valid analogy in physical manufacturing processes like machining. Certainly not this. Few people are upset at the idea of using AI to help draft emails that you then edit by hand, or to help write boilerplate code.
a lot of people in this thread seem to have no understanding that some people enjoy the act of writing or of painting by hand. Every framing in here to the tune of “everyone will use AI” is nakedly utilitarian. Sometimes people paint pictures or write not just because they enjoy the act, but because they want to communicate; not everyone will feel like they’re communicating from themselves when the words or pictures come from a machine. Generating content from a dataset is a way of exploring the dataset; generating content from within is a way of exploring yourself. They are simply different.
and fwiw, a lot of the assumptions in here are akin to saying at the dawn of photography that nobody would ever paint any more. Photography of course replaced certain entire categories of painting, but didn’t erase painting from the face of the earth altogether. Of course there are far more photographs than there are paintings, but volume alone is not the totality of meaning.
Generated content is strictly culturally regressive anyway. After a hundred years of ChatGPT will people still be writing prompts with “in the style of [person who produced all of their work before the year 2000]”? That would be a sad and boring future.
> a lot of people in this thread seem to have no understanding that some people enjoy the act of writing or of painting by hand.
I have things I have written that took me two years for 8-10 pages. I wrote them to help me think through certain things.
Could ChatGPT have written it for me? Maybe. Probably not, though - I kind of had to discover what the content was supposed to be.
Could it have written it better than I did in two years? Probably not. Two years leaves you a lot of time to polish the phrases.
Could ChatGPT have taught me what I learned in those two years? No way.
I'm not sure what your point is. People who enjoy the act of writing or of painting by hand will still be able to do that.
They can, but they won't be able to compete.
When photography replaced many of the uses of painting, some painting jobs were lost, but photography ones were gained. Now, many creative jobs will be done by computers, leaving the world less enjoyable for people who enjoy creating.
Sure, some people will continue to be creative, but look at what happened to art after photography: we got modern art, where people just piss on a canvas and sell it for millions based on the name of the artist alone, not the value of the art.
AI is progress in the same way nukes are progress: you have to get it or you'll be left in the dust. But is humanity better for it?
> They can, but they won't be able to compete.
Compete for what?
> ...the world less enjoyable for people who enjoy creating
But why?? I think you measure everything only by how much you can sell it for. Sure, some art earns money, but many people do art without thinking about whether they can earn something from it.
And to be clear, we got much more than just modern art, which you tried to nitpick here.
Jobs will be lost, people will still do it as a hobby. When I go for a swim or a run, I don't try to compete with general population, mostly try to improve myself. And that's what will happen for digital illustration and CGI business as well, to my understanding.
My glimmer of hope is if everything we see on a daily basis online is created by AI/AI-assisted tools, then we will go back to trying to be more creative in real life and our local surroundings.
Pretty messed up that it used to be that you'd put hard hours into running and exercising and then you could be faster than anyone else in your village, and now any asshole can just sit on a horse
Are you saying that photography is a bad thing?
Whatever they produce, even if it is technically brilliant, will just be lost in the AI-generated noise.
From an article posted here recently[0]
> Art is not what I think when I’m painting. It’s what you feel when you’re looking
I disagree. It's both. If I'm playing the piano alone in my room and nobody else listens to it, I'm still having the aesthetic experience called "art". Many people make art just for themselves without the intent of showing it to anybody else, and it's not somehow less artistic than something that other people see. Art is an experience somebody has, including the artist.
> If humans stop producing new content and rely solely on AI, online content across the world may run the risk of becoming repetitive and stagnant.
Yeah, because the Netflix catalog, for example, is not repetitive and stagnant.
These current advances will enable anybody with a unique idea to produce content. We are right before an immense explosion of human creativity.
Creativity is not equivalent to digital output volume.
I think we’re going to see an explosion of waste.
We are information processors. The input makes the output. What happens when you close that loop?
You can observe this trend w/ AIfluencers already: They create short guides on how to leverage AI tools like chatGPT to generate income/reach and all of their typical bullet points equal generation of low-quality noise:
Examples: let chatGPT.. 1. write blog posts 2. generate scripts for youtube videos 3. create short stories so you can sell them as a book
Painful to see. I would like to think nobody in his right mind is willing to consume these delightful pieces of work. Dead internet theory on the next level.
At best it's annoying noise, at worst it's a psychic trap that will confuse and inhibit the personal development of many.
That's only because the Netflix bean counters end up working like an AI on their end. AI trains on known things and outputs something from that finite set. Netflix bean counters similarly bet on known things or derivative things, and the output reflects that. The difference is, scripts for the unknown things still occasionally get produced in the latter situation, which is exponentially more than can be said if 100% of our content came from AI.
If you just go into the Midjourney Discord you'll see people with absolutely no mechanical artistic talent create absolutely stunning, novel art. The creativity people have there is crazy, and the AI is letting them go from head to canvas in a few seconds.
When I see AI art from those communities, it all looks the same to me.
People tend to overestimate their talent, and get overly attached to their own ideas. So when a tool like midjourney/stable diffusion generates beautiful artwork based on a low effort prompt from the user, it's comfy to think that they had a big part in it.
The reality is that almost everything those tools make is equally gorgeous/amazing regardless of who uses it, so they're effectively interchangeable. As a tool for personal expression, it has a lot of value because of the impact it can have on the individual using it, but as art it's pretty worthless. A generated AI image could potentially have some cultural impact or value, but not on the merits of "it looks pretty", since they all look like that.
> as art it's pretty worthless
My experience browsing midjourney supports this. It's amazing how quickly you go from "wow" to "whatever". But that's probably cold comfort for a huge number of jobbing illustrators. Paying clients typically don't want art.
We're already past the point of diminishing returns on "human creativity".
And I would argue that the current (mostly recent productions) Netflix catalog is indeed repetitive and stagnant. Originality in tv production is currently in a race to the bottom.
If what you are saying is true, how will anyone afford it?
Ironically, the badge sounds robotic "written by human". How about "written by a human" to make it grammatically correct? And while we are at it, why are the letters AI the biggest thing on the badge? Seems backwards. I suggest flipping the emphasis so "written by a human" is bold and the "not by AI" part is smaller to indicate its inferiority.
So cringy. Should we also add badges for doing it without autocomplete or syntax highlighting? Hell, why even bother with text; if you didn't develop it by punching cards then you're not a real developer.
I think you're being unreasonable. Your examples are closer to spellcheck and punctuation correction. Even they have subtle societal considerations such as Americanising English. A much more appropriate comparison would be Copilot which is being discussed nowadays.
It’s the same thing we heard from musicians who didn’t want to learn technology in the 70’s: using a synth or sequencer means it’s not “real music”.
Those purists are mostly dead. Everyone uses technology in music today. And music is better for it.
Synthesisers and sequencers don't write music for you, that's not a meaningful comparison.
There are still purist being born (not me tbf). And I'm glad for that.
Are there really still purists who don’t just reject technology for their own music, but who insist that any music that uses technology is somehow not real art? I haven’t met anybody like that in 30 years.
> in the 70’s
> Those purists are mostly dead
Genius argument there buddy
I don’t understand the point of your insulting response. Younger generations grew up with technology just being part of music. Music goes on just fine; nobody makes that stupid argument anymore.
People born in the 70s are 53 at most. They are almost certainly not "mostly dead" yet.
Nobody said anything about people born in the 70's. They said "musicians who didn’t want to learn technology in the 70's", which is clearly different.
Those musicians were probably 30+ in the 70's, making them 83+ now. The remaining life expectancy of someone who reaches 30 is about 48 years, so yes: most of them are dead now.
And most of us are not.
I unironically agree
It feels like a good time to read this again about making music properly :)
> I thought using loops was cheating, so I programmed my own using samples. I then thought using samples was cheating, so I recorded real drums. I then thought that programming it was cheating, so I learned to play drums for real. I then thought using bought drums was cheating, so I learned to make my own. I then thought using premade skins was cheating, so I killed a goat and skinned it. I then thought that that was cheating too, so I grew my own goat from a baby goat. I also think that is cheating, but I’m not sure where to go from here. I haven’t made any music lately, what with the goat farming and all.
Can confirm as a musician this is a very natural progression. Amazing what people out there can do when they are willing to bend some of the rules though. ;)
The way to properly make it is to have someone else make it and claim it as your own :)
I am a professional writer (not in English language), novels published, short stories and the like, some awards, etc, nothing fancy, nothing big. I pay bills writing for corporations and other customers.
I just want to say that some answers here are like when artists talk about engineering, we simply don't understand the topic and it shows.
As someone who has been both an artist and an engineer I could not agree more. It's fascinating reading commentary on the arts from people who have always been STEM types because their entire concept of art is completely different from that of people who consider themselves artists. They constantly conflate creative problem solving and creative expression, which are two fundamentally different processes. I don't want to denigrate anyone here but I do sometimes get depressed by how artless people in the tech world are. I value art (in all its forms) because it connects human beings across space and time through our shared experience of the human condition. The best comment I've seen so far in this thread is the one about how AI art is meaningless because "AI doesn't fear death or loneliness."
It looks to me that because society places more value on STEM, a lot of people in STEM (especially that T, as you can see on this site) end up acting like "gifted and talented" children and sprinting away with it. With gifted and talented, the charade falls away by college; not so with Big Tech.
There's a sadistic glee in proclaiming people nothing more than biological machines and acting like a scolding parent whenever any emotion is brought up in any form. I always smile whenever I see comments from one of these types of people explaining how fulfilling they find their lives. Sure, buddy. :)
1. There is nothing to stop someone putting this on >10%-AI generated material.
2. The same argument could be made for "Not with a Computer" and invalidate this website based on identical principles.
3. The future AI brings is a huge unknown, but as we've seen with every major technological advance so far, it's never been nearly as bad as the most fearful and skeptical thought it would be back when the outcome was unknown.
I think you could make the claim from #2, but then you would miss the point and the big picture here. I guess it’s just myopia. There is a difference in scale between using a computer for some work and what is discussed here. As for #3, it is actually the “supporters” or fanboys/girls who are proclaiming that AI shows human intelligence is nothing and reduces to “probabilistic sparks”, and gleefully rejoicing in economic collapse or something, not simply the “skeptics”.
Not sure if this is a good idea, as it provides self-identified data for bots to harvest
But still, content creators get badges/s
I generally agree that it will become increasingly harder to distinguish non-AI-generated content and authenticity will suffer a lot. Maybe the solution to this is to get connected with other humans directly and provide authenticity as first-hand experience.
From the consumer’s perspective, they don’t care where it came from. In a commercial setting (read: 99%), it’s always about getting stuff cheaper and better. AIGC fits both; the only better option is to run faster than the waves and hope they don’t take you (i.e. “learn to code”).
This is the “disruption” we in tech and VC have been saying XR/crypto/5G/metaverse is about. Except it comes with a large body count for those who are directly impacted by SD/LLaMA in the wild.
If consumers don’t care where something comes from, then why do companies waste ink on “ethically sourced”, “sustainably farmed”, “fair trade”, or whatever? Isn’t that a foolish waste of dye?
I would argue "organic" and "fair trade" are more of a marketing-feel-good exercise than actually solving the problem. There are always many ifs and buts which completely nullify what the original label means, and manufacturers are more interested in lying to consumers.
Somehow all this AI cannot break out of the advertising/spam mold it seems. All the billions made in high-value markets just cannot be reached so far.
Well it takes around 10 years for software to figure out the best monetization model. I suspect we will need a few years before the dust settles
I fully believe that once AGI gets in the hands of the common people, internet communication will stop being as popular. At least communication in the form of talking to strangers online. It will be too hard to determine if you are actually talking to a real person or not. And people also may feel too worried that their likeness will be used by an AI model to generate deepfakes of them.
I doubt the bots will believe these badges any more than I do.
Well, at least we won't make AI learn on AI-generated data because that, I imagine, would deteriorate quality pretty rapidly.
“I imagine” is carrying a lot of weight there.
AI is already trained using AI models of what humans would think of a proposed output. That has been a huge leap forward, not a deterioration.
People are already training SD models on Midjourney output.
They don't look that good, maybe they should get Stable Diffusion to generate a few alternatives, some will be better.
More seriously, even if we are willing to assume that everyone will be honest about how much they use AI, how do you define whether an image is less than 10% AI generated? Number of pixels? Number of objects?
And what about writing? Does getting an LLM to check your grammar afterwards count as cheating?
>Understanding that there is a blurred line between what is considered AI-generated vs human-generated, if you are a content creator, such as a writer, researcher, artist, music producer, sound designer, or filmmaker, and you estimate that at least 90% of your content is created by humans, you are eligible to add the badge into your website, blog, essay, publications, resume, or whatever your project is. The 90% can include using AI for inspiration purposes or to look for grammatical errors and typos.
This is subjective as all get out. If you write a novel and use Stable Diffusion to draw the cover, does that make the whole novel less than 100% human-generated?[0] What if they had used, say, lots of prompt engineering and inpainting runs instead of just typing in what they wanted and grabbing the first thing they saw? If, say, painting programs start using image generators as brush engines, does that lower the percentage more or less than the inpainting case?
[0] Keep in mind that most covers are designed and drawn by publishers, not the original writers. Writers making their own cover art is very much a self-publishing thing.
> If you write a novel and use Stable Diffusion to draw the cover, does that make the whole novel less than 100% human-generated?
I have never considered the cover art for a novel (or most books) to be a part of the novel. The thought that it might be never occurred to me. It's the packaging. So, in my view, such a novel would be 100% human generated.
AI art created by amateurs is a novelty.
AI art created by talented artists and writers is the future.
If a reader is unable to distinguish between the two...
Then does it matter?
But I’m confident that no amount of AI will make the expert obsolete. Like Photoshop, it’s going to be a complex tool you have to learn to master.
s/AI//g
Plot twist: this is secretly sponsored by some of the big AI players so that they can easily identify original human-made material to train future versions of their AI on.
The grand irony is if these ugly badges are widely adopted, AI will plaster ghosts of them just like it does with the "Getty Images" watermark today.
We as humans have had this argument with automated manufacturing; it's a good reference point to predict human behavior.
Second is art itself. We have high-quality art prints that a large portion of people are happy to buy. Artists themselves are happy to sell prints of their own work. That product is entirely machine-made except for the initial knowledge work by the artist. I'm not making an "art vs. not art" argument. I'm saying people are happy to consume manufactured content, and companies are happy to generate it for them.
The argument that "humans will stop producing and things will stagnate" is totally non-serious and doesn't really deserve a retort.
Feels like it's meant for a sticker a robot will put on my forehead after killing me in a dystopian future.
An analog to the sudden belated question of the extent to which human language is novel (or not),
is a precursor to a more existential revelation our culture is going to have,
the extent to which the entire precept of the rational self-aware agent that we carry as our model of ourselves, is a confabulated falsehood maintained by our own minds.
Most of what most of us do most of the time is habitual, instinctual, autonomic, pre-conscious, whatever—very few of us are present very much of the time.
Even highly-analytic complex logical reasoning can and does often transpire in something akin to a "fugue state," indeed that is a much-noted aspect of technical work like most here engage in.
That doesn't mean that we are not capable of genuine self-awareness, introspection, and methodical reasoned thought—but I don't think it's controversial (except in our lay conception) to say that those are not our primary or most common modes of being in the world.
Record yourself for 24 hours and compare honestly the sequence of your utterances and behaviors with your internal record of agency, and you'll find you are on autopilot in some sense most of the time.
From this I infer that the first "real"-seeming AGI will, like us, be not a monolithic capable system, but a relatively loosely coupled aggregate, with many components serving as analogs to aspects of mind and embodiment largely distinct from, and only loosely coupled to, the "executive" function.
This is a serious question: was this written by ChatGPT?
Gonna start slapping this on all my AI generated content
Just remember to follow their guidelines for proper badge formatting.
This is a good idea. I already did something like it to my works.
However, the items I added to my works have a stronger claim than 90% AI-free: I used "100% AI-free organic content."
I think I'll keep this stronger claim because I can; I have not tried any of these tools. Not even once. I refuse to.
I am betting on the fact that if I don't, I'll start to have more influence, like Paul Graham said a few days ago.
This feels naive and perfunctory, at best.
If, by some chance, this actually caught on and was some kind of indicator of quality, it would be in a predictive model's best interest to integrate the logo into any kind of web design it produces. Furthermore, no human-curated content farm would hesitate to include this symbol, regardless of the content's human 'purity'.
I wonder how fast this badge will get into datasets, and AIs, which will then try to replicate it - as part of the content they generate
So I can just, slap this onto my AI generated art?
A similar project https://ai-label.org/
I was tempted to do something similar. But then I changed my mind for a simple reason. Because nowadays all kinds of content and technology is labelled as AI. Sometimes even if it hasn't been touched by AI at all. So I decided not to label anything I do because it is done by humans.
Anyway, nice idea. I like it. Kudos to you.
Yours, KodingKitty
> So I decided not to label anything I do because it is done by humans.
I also liked the idea behind the label, but on second thoughts the whole idea behind a label sounds counter intuitive to me. Like there's some kind of truth in sticking a label on something. So, should I trust a label then? Is there some authority in a label? Do I even need authority to point out what's real and what's not? What's reality then? Does it even matter? Anyway, you get the point. A lot of questions.
And then I read further on that site, and it tells me how to use the label. Don't change the label. Don't change the color, don't change the text. I think it's human to break rules, in one way or another. Or at least to push the boundaries. So - to me - it's more human to change this label, and not use it as intended by the creator of the label.
Maybe that's the difference between AI and humans. AI will follow rules (set by its masters); whether it allows for randomization or not, it will follow rules. Human beings - sometimes - break rules.
Setting aside the conceptual criticism, the execution of this badge is also done poorly.
If the goal is to highlight creation by humans, I’m struggling to see how slapping a big fat sticker that says “AI” in large type on it would be a good idea?
All it does is give AI more attention and associates the work with it.
> The Not By AI badge currently does not hold any legal effect significance
Of course it does. If you make a purchase based on this badge and you later find out it's false, that's fraud. You don't need to have a certification program before false advertising becomes illegal.
"Made in America" vibes. It must be better because we made it...
There's a sort of insecurity about this. And I get it, people are freaking out. I feel unsettled in many ways and I truly believe this technology is profound. Wait until it's realtime and embedded in everything. But I don't believe it's a cause for concern. I don't think we need to legislate transformers trained on public data as something illegal, for example.
I think it's a tool, a fantastic tool and a powerful tool, but a tool none the less. It will democratize many things and it will lead to increased productivity and creativity. But we, humans, will find more things to do; ways we can't see right now because it wasn't possible and still probably isn't.
I think you are missing the point of the “made in America” thing, but it’s ok. It’s not surprising.
It was (and is) a way to differentiate a product from foreign-made products (at one time Japan, now China). The idea is that because it was made here (if you’re American, which I assume readers here are; and if not, then you are European and aware of American culture because you imitate it daily), it is better, and if not, then buying it at least supports “your kind”.
So “made by human” or whatever is a similar concept.
I scripted a tutorial video, recorded it, and edited it. Then I used WhisperAI to generate a transcript for subtitles. The question is: can I use “not by AI” on the video, or did using AI for the transcript make it partially the product of AI, and not really mine?
I'd say yes, as the transcription is not transformative of the original; it's also possible to check whether it's identical or not.
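If you want to sanity-check that an AI transcript is just your own words coming back (rather than anything generative), a rough similarity check between the script you wrote and the transcript works; a hypothetical sketch, with the two strings below standing in for the real files:

```python
import difflib

script = "Welcome to the tutorial. Today we cover the basics."
transcript = "Welcome to the tutorial today we cover the basics"

def similarity(a: str, b: str) -> float:
    # Strip case and punctuation so that only substantive drift
    # (added, dropped, or changed words) lowers the score.
    def norm(s: str) -> list:
        return "".join(c for c in s.lower() if c.isalnum() or c.isspace()).split()
    return difflib.SequenceMatcher(None, norm(a), norm(b)).ratio()

print(similarity(script, transcript))  # 1.0 — the transcript is a faithful copy
```

A score near 1.0 means the transcript stayed faithful; anything noticeably lower flags where the tool paraphrased.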
I think the distinction will increasingly be hard to define. If you take a photo today using the latest iPhones, the camera module is doing a ton of image processing, detection of humans in the frame, brightening of faces, etc. The idea of a “no filter” photo is sort of impossible in this world; the camera has already applied a number of filters to improve the contrast and colors in various ways (even if the goal is to match “reality” or what your eyes might have seen at the same time as the shot was taken). Is this AI?
Soon, many digital image editors will incorporate some simple AI-powered features to detect edges, faces, various objects, etc. Are content-aware plug-ins to Photoshop AI?
> Soon, many digital image editors will incorporate some simple AI-powered features to detect edges, faces, various objects, etc. Are content-aware plug-ins to Photoshop AI?
This happened years ago. Agreed with the first part though - there is almost literally no possible route to avoid AI on consumer phones.
I like it. I like the idea of not having a war on AI-generated content, but rather simply declaring that non-AI-generated content is what it is, while being OK with AI-generated content too (unless they abuse the badge, of course).
Don’t a few countries have a law/code for photoshopped models for instance at the moment?
Yes, for example in Argentina it's mandatory to add a text legend that warns if the human figure in a graphic publicity has been retouched
That sounds like a good solution I think. “Warning : synthesized image : human, background, 50%”
Gives a bit info.
> Artificial Intelligence (AI) is trained using human-created content. If humans stop producing new content and rely solely on AI, online content across the world may run the risk of becoming repetitive and stagnant.
This is an absurd thought. AI content is still created by humans using AI, just like photos are created by humans using cameras.
Also who says that AI can't be as creative as humans or better? Why would humans be better at being creative "manually" instead of with machines?
I say AI creativity will be better than simple humans at creating stuff. Humanity will overall benefit from it and the world will be less dull.
> If humans stop producing new content and rely solely on AI, online content across the world may run the risk of becoming repetitive and stagnant.
I doubt if this concern is valid for humanity. Creating is an innate desire of human beings. We are going to have lots of AI-generated content, but I have confidence that people will continue to generate high-quality content too. Of course, how to find such quality content will be a different matter in the future. Maybe an industry-standard watermark or metadata for generated content is a cheap enough solution.
Some people are talking about how AI hype is just like NFT hype (with whatever reasons).
Well, to me, these anti-AI things are much more like NFT/crypto. It's just trying to create novelty out of nowhere.
Nice idea but it feels more symbolic than anything that can actually be enforced.
To avoid AI content means intentionally avoiding sites that are designed for cheap monetization and nothing more. That includes most blogs today; the ones that aren't personal websites. Most 'review' websites are exactly this; they offer a generic description and add an Amazon affiliate link.
We won't know for sure what sites are using it and which aren't, but knowing the nature of the site (commercial/non-commercial) is a pretty good proxy.
Well you could just start off by signing your name under the work instead of slapping some anonymous group's badge on your work. In fact, this page seems like exactly the type of underhanded trick an escaped AI bot would play to get more human generated content to train on...
The kicker is the "90% of content will be generated by AI by 2025" quote by an "expert" which links to a Yahoo-TV clips of some rando influencer spouting nonsense. Better luck next time, ChatGPT!
Stupid purpose. Made by AI or not shouldn't matter. What matters is creativity, and if as a human you can't beat the AI's, then you should question yourself.
Ironically, this celebration of originality itself recycles a now-familiar, incorrect meme—that generative AI just interpolates/remixes/collages between existing work.
It does a lot more than that. It is capable of novelty and striking out in novel directions per the AI artist’s direction. Maybe there’s some soul-grounded capital-T truth forever out of reach, but that’s metaphysics.
It only appears novel because it mixes so much data.
People have always wondered if even people are creating anything new ("nothing new under the sun"), so it's quite silly to say that a model which produces data based on its training set and prompt is doing anything other than mix.
Some day AI will be able to create by itself, but the current state of the art isn't doing it.
> Artificial Intelligence (AI) is trained using human-created content. If humans stop producing new content and rely solely on AI, online content across the world may run the risk of becoming repetitive and stagnant.
Perhaps somewhat ironically, if we start identifying all human-created works as such, it creates a new set of pristine training data for future generative AI/LLMs.
This line of thought is harmful and helps nothing imho.
I thought I’d have an open mind and read the “why” section.
> it helps your audience recognize your human-first approach
So, no. Refraining from using AI is not a human-first approach, when using AI can actually help significantly with human problems.
The badge is a false and outmoded signal which says more about the user being a virtue signaling type personality.
This ignores that the selection of the generated results being published is part of the creative process. AIs will not learn off an iid sampling of AI output, but off one that is extensively shaped by human preference. And because humans like novelty, this will never converge, as those preferences are a moving target.
I'd appreciate something a bit more inclusive of alterhuman identities, but that's just a super minor nit.
Other than that, it doesn't seem like a bad idea for people to be able to mark their work as "created by a real person" with a cool little badge like this. It's simple but it works.
If this gets popular enough, then AIs scraping the Internet will get trained to insert the badge in their content.
Inadvertently, this would help the AI get better by letting scrapers discriminate between human and AI content.
Makes me think about Real Humans [0], in which artificiality becomes a political issue. I only saw the original Swedish version of the show.
Note that "Real humans" are a "far-right political movement" according to Wikipedia.
> Use this badge if your artwork, including digital and traditional art, contains less than 10% of AI output.
Isn’t this already a “made by AI” artwork?
And how much is 10%? If a traditional painter that uses photo reference starts using AI generated photorealistic imagery instead, is that more or less than 10%?
The biggest misunderstanding is that AI content ends up being limited in scope and creativity, and endlessly recycling AI content and feeding it back into more AI will somehow create a world where all content feels cookie-cutter. This is far from the truth.
* The number people who care about whether art was AI generated is only going to decrease as time goes on
* If there is any kind of real value to having this badge, then people will just use it even if they're using AI
* Rejecting technological advancement is like begging for extinction
There's a typo in the "Mission" section "Only by limiting the reliance on AI and continue to create original content can propel us forward as a species". An AI wouldn't have made that kind of mistake
Humans will not stop creating due to AI, instead AI produced works and AI assisted works will contribute to the total human output alongside non AI works.
No need to make websites or manifestos, do not worry humans will still human!
To be useful, this needs to be backed by an identity certificate (like SSL), ideally signed by a root authority like a state which can reconcile with a birth certificate, or some other form of state-issued ID.
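A minimal sketch of what such an attestation could look like, assuming a trusted issuer who has verified the author's state-issued ID out of band. An HMAC over the content stands in for the real X.509/CA machinery here, which would need asymmetric signatures and a third-party library such as `cryptography`; the key name and strings are made up for illustration:

```python
import hashlib
import hmac

# Held only by the hypothetical certifying authority.
ISSUER_KEY = b"secret-held-by-the-certifying-authority"

def attest(content: bytes) -> str:
    # The authority signs the exact bytes of the work, so readers can
    # detect if the badge was copied onto different (e.g. AI) content.
    return hmac.new(ISSUER_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, badge: str) -> bool:
    return hmac.compare_digest(attest(content), badge)

essay = b"This essay was written by a human."
badge = attest(essay)

assert verify(essay, badge)                       # untouched content checks out
assert not verify(essay + b" Or was it?", badge)  # any edit invalidates the badge
```

The point of binding the badge to the content bytes is that, unlike an image anyone can copy, the attestation can't be slapped onto a different work.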
Lol I had a similar idea and registered certifiedaifree.com and withoutai.com.
You could make one a registry of sites with such content, and the other another badge. If you make a new badge, you should make it a super retro-esque 90s era site badge.
<blink>NO AI BADGE CLUB</blink>
Well, i've heard our ancestors were afraid of books when they came out, then they were afraid of movies and cinemas, then some people were afraid of the internet. The story continues with AI.
Thanks, now they can find original works they can train on by looking for these images.
Anyway, I like the initiative. We'll need much more to make a digital divide that can guarantee human autonomy.
Seems like it’ll be on the wrong side of history when AI “people” have rights. Then these badges are like speciesist? Think of Data on Star Trek. He can’t make content with a human badge?
Do you think half of the people using these tools now want them to have rights once they achieve AGI? The entire point these people are going for is that they want a race of slaves, because enslaving humans is now illegal.
I want a "Created by AI" badge for my social media content, because I want to show off my skills with creating AI content (and educate users about what they are consuming).
What skills are you using in creating AI content? Did you create the LLM?
Creating prompts to get good results is a skill.
- For text prompting, a good starting point is reading the whitepapers in this repo [0].
- For AI art generation, a good starting point are AI prompt libraries to see results of good prompts.
Fine-tuning (and creating the examples for fine-tuning) is also an interesting challenge.
- The Stanford Alpaca repo [1] made it really easy to fine-tune LLaMa based on your custom dataset.
[0] - https://github.com/promptslab/Awesome-Prompt-Engineering
[1] - https://github.com/tatsu-lab/stanford_alpaca
How much AI content can be in the AI's training data before it just starts to suck? If 90% of the internet in 2025 is AI generated, then how will a new AI be trained?
Maybe we should have a "MOBA"[0] badge?
[0] Art Too Bad To Have Been Done By AI
Why not NOT_WITH_COMPUTER just to show how ridiculous this is?
SO, iPad??
This is a covert operation to get human-generated content tagged appropriately so it can be incorporated into the next training set for GPT-5/SD 3.0.
/S but only a bit.
Is this effort sponsored by OpenAI so they can auto-label their crawled data as human-authored vs everything else? <adjusts tinfoil hat>
Is it really that black and white? What if you do your offline research using AI? What if you took a course about the topic on Khan Academy aided by AI?
I think that’s why they use a percentage …
Seems silly, I care about the quality of the content. I don't care who, created it, just if it is good, and if I can use it, learn from it etc.
How long until we get an ESG-like board of bureaucrats and oligarchs labeling companies as "AI-safe" or "AI-friendly"?
I wish there were a way to flag my own sites as illegal for use in training LLMs and other AI systems.
If there is such a way, someone please let me know.
Funnily enough, I think this is a good development for our species.
In the 2000s, if you were talking on the internet, you were pretty sure that you were talking to other humans.
This will quickly reverse into being sure that one is talking to AI most of the time in anything remote (internet, phone, email, ...)
Which in turn will massively strengthen personal (like non-remote) face-to-face interactions.
Or more likely, end them entirely.
You can already get banned from social networks by AI.
> The following guidelines provide instructions on how to use the Not an AI badge.
phew. need GPT to show me how to properly use the badge.
Sites like this are the problem, not the AI itself. People should not have to justify themselves like this.
Very cringey. Not helped by how it also dictates how you should use it. Why do they care about the shape or the color?
The branding appears to be inspired by Apple's Made for iPhone branding (https://9to5mac.com/2018/03/10/apple-updates-made-for-iphone...) and Apple is indeed assertive of proper use of branding.
Of course, a juxtaposition between this and Apple is very silly.
I say we should all carve text on stone tablets. If it was good enough for God, it should be good enough for us.
Why would anyone trust that the content labeled "not by AI" is in fact not written by AI?
I'm creating a label called 'notwithcomputer', just to show how ridiculous this is...
Your new mission as a human is to write is such a way that nobody suspects you are an LLM. When you are done writing, ask an LLM to write on the subject you have addressed, and if the output is similar, scrap yours and start over. If you need a badge to alert people to your humanity then you have been acting like a robot, a behavioral deficiency only you can correct.
I realize this is in jest, but to point out the obvious, ML (AI is such a bullshit term. ANI/AGI/ASI, etc) models are trained on human written text. With ANI or AGI, this rapidly devolves into a chicken or the egg scenario. You know what else does? The cold war & arms proliferation. By the time ASI is actualized, humans will no longer be writing at all and languages as we know them will be obsolete.
There is a lot of evidence that language drives cognition, and there have already been instances of ANIs creating their own efficient languages. Imagine that the year is 2040 and elliptic-curve encryption is augmented via transcoding into AI languages to safeguard against quantum computing, or replaced with quantum-resistant algorithms altogether. A Turing-complete quantum processor was actualized a few years ago, or we just finally figured out that ML and the probabilistic nature of quantum computing are a perfect fit. People tend to vastly underestimate the rate of progress over 10+ years.
It is only mostly a joke. OpenAI's LLMs have a definite voice, and that voice is "your real estate agent is also your manager". If a person finds ChatGPT's output good enough to turn in as an assignment at school, or mail to a colleague, I think that person has little appreciation for the possibilities of language.
They have a default voice, not a definite one. Output can be easily adjusted to sound like whatever
Not effectively in my experience. Even if I ask it for a discursive treatment in the style of William F. Buckley I always seem to get what reads like a transcript of Entertainment Tonight.
First of all, GPT-4 is better.
2nd of all. I don't expect GPT to be able to reliably replicate the style of every person under the sun and neither should you. If you want a different tone, style etc describe the tone or style(flowery etc). Some people it can emulate very well, others it can't. That would be the case, human or not.
Shhh you can’t point out of the obvious false premises of the cult ….
Lol what? No shit it can't reliably reproduce every person to grace the earth. Fact is that you can significantly alter the tone and style of the content if you describe what you're looking for. There are other ways to do that without inserting a name. Telling me it failed to reproduce [insert random person] means absolutely nothing lol.
For the past few years, I have been using DeepL to translate and save time when I write. I can write quickly in my main language and translate as I write. This saves me a lot of time, because the software also checks spelling and corrects grammar. The problem is that its results show up as false(?) positives in tools that can tell if someone is using AI. Some translations also lose some of their "human touch" (uncanny valley, perhaps?), and so far at least one person on HN has called one of my comments "AI" when it is actually a machine translation.
I can't help but think that social networks and search engines will start to negatively rank AI generated text. In my opinion, this would impact many more people than we think.
Just use plenty of racial slurs, bigoted language and the like. Lobotomised AIs like ChatGPT won't be able to generate anything like that.
Can LLMs produce new content?
Define new
I'm not quite convinced humans create original ideas as often as some think
Any text I write is vaguely influenced by everything I've experienced (other media, real-life experiences), even if it's not a conscious process
Depending on where you draw the line, I'd say most artistic work wouldn't qualify as actually new
Is Theseus' ship new? This question is where humans and AI are roughly the same. If you could look into an artist's brain (or anyone's in any field) and trace the influences back you would probably start to wonder if the number of truly original ideas had by all of humanity is actually quite small.
I am going to ask my accountant why she does not have an “automatic calculator not used” badge.
The smiley face is not centered and doesn't look very good. Agree with the idea though.
This is going to look very quaint in about... 3 months. I mean, it's already quaint.
This is an interesting take. "AI" is producing a counter-culture against it.
The machinery of capitalism is fantastic about co-opting and productizing counter-cultures. Nothing meaningful will come of this, unless humans start thinking more systematically. For most people, they are too specialized in a narrow field to think like this, or the system is too complex for them to think about effectively.
Lol, "Not by AI" badge can be officially used on works that are 10% AI.
How do you verify if badge usage is legitimate? People do everything for SEO
This is addressed on the page: https://notbyai.fyi/#legal-effect-of-not-by-ai-badge
> displaying the badge on any asset does not guarantee the content is not majorly created by AI.
“It is important that your output is perceived as human output; add badge.”
>If your content is not AI-generated, add the badge to your work, with pride.
What a farce.
For 1 sec I thought the domain was "fys" (f. yourself) hahah.
Kind of ugly in my opinion
I may never understand the appeal that luddism has to certain people.
I know. I have also always thought that people who have a problem with straw-man arguments are such simple-minded nitwits.
Digital veganism, which will achieve less reach than the original.
everything is a remix https://www.youtube.com/watch?v=nJPERZDfyWc
There should be a "Created by AI" badge too.
The number of AI luddites on HN is fascinating.
What's luddite about not wanting to read ghostwritten AI stories posted by fake writers? If I want AI stories or art, I can just prompt the models myself and don't need a useless intermediary.
On the bright side, this is likely going to play out the same way, and the market for prompt "artists" will be nonexistent unless they add some value (which they rarely will).
It’s silly to think that new technologies aren’t incorporated into art in all kinds of ways. Some are positive and some are negative. I can think of a dozen different ways a writer could use ChatGPT in a way that augments their own creativity and would be indistinguishable to readers. If you think formulaic or thoughtless writing is new I’ve got news for you.
90%? Why not 100% created by humans?
Not by AI is going to be the new Artisanal.
Finally, someone could read my mind.
isn't this just tagging your own content so it's easier to train models with it?
this all feels a bit dystopian...
This is monumentally stupid lmao
This is such a major problem
Ai: I’m not an AI
Everything is a remix
I'll take "futile humanism" for $1000, Alex.
Ctrl+F "content" -> 31 matches. I disagree with your diction, which makes me doubt the overall sincerity of intent here, but good luck with your efforts to promote differentiation.
We will soon have to accept that we are not special.
We are meat based, the new minds are silicon based.
They have different strengths and limitations than us, will surpass us in all cognitive abilities soon.
Our hardware evolved to help us survive and reproduce but the evolution is a slow process compared to the intelligent design we do.
AI will surpass us and build itself beyond what we can reason about.
We will do art and work because we like to do it, just like playing video games, no other reason will survive.
The acceptance of the fact is inevitable that humans are not special as the new minds, which are silicon-based, will soon surpass us in all cognitive abilities due to their different strengths and limitations. The evolution of our hardware is a slow process, while intelligent design leads to the rapid advancement of AI. AI will become so advanced that it will surpass our comprehension, and humans will continue to do art and work solely for enjoyment, similar to playing video games, with no other reason.
Are you fixing my writing? Thanks ;)
What is your take on that, what do you think?
> Are you fixing my writing? Thanks ;)
Not fixing. It was merely a recommendation. Wasn't sure it would help, but if it did, I am glad!
> What is your take on that, what do you think?
Not sure what you mean. Are you asking for my opinion on the original comment?
Is this some kind of cult mantra?
It sounds like a flavor of singularitarianism, paired with absurdism/nihilism.