GPT 4 coming next week
theinsaneapp.com

I'm sure this has been talked about somewhere, so feel free to just leave the link. But in an internet where people just go to one of these chatbots for all their answers/searches, why would anyone continue to post content to the internet? It seems at some point you would just be working for free as a content creator for all of these companies. Nothing you post would be linked to, you would never be cited as the source of the information, and you wouldn't even know if anyone saw anything you wrote.
I've been thinking the same.
I'm still tempted to post things online for my own reference. And I believe we benefit from increasing the common wealth of information online.
But the fact that my own posts are feeding the thing that will cause massive redundancies in my industry, ultimately diminishing my financial worth, is giving me pause for thought.
Because ChatGPT isn't just a way to share the information I put online. It'll eventually, within a year I'd guess, replace me and my colleagues; it'll be regurgitating fully formed projects, and will probably learn to do the ancillary activities too.
I believe it's completely delusional to expect the court system will regard "training an AI system and distilling millions of works into a commercial model" as anything other than creating a derived work based on those works, an act that is not covered by any traditional form of fair use and therefore illegal unless explicitly authorized by each of the millions of authors of those works.
We are living through the early Napster years, when people were saying with a straight face that "everything has changed now in regards to intellectual property" and "information [specifically, music in mp3 format] wants to be free". In Morgan Freeman's narration, they were about to find that little had, in fact, changed.
Most of the raw material used is available for free with no copyright on the internet, though. I have no desire to charge for my posts that are mainly for my own reference. Nor do the people who are answering on stackoverflow.
But when all of those are harvested to cause mass redundancies, there's a difference. Still, at least in those two cases above (essentially reference material and technical answers), I doubt there's a copyright problem.
What could regulators even do? Force the model owners to regurgitate source material references? Force the model owners to pay a small fee per post used? Difficult when most of the material is freely available online with ads.
Besides a lot of the material will come from third parties who have scraped the posts. A whole branch of regulation could appear to try to track the source material which has been put online for free. Or even scraped and then put online again sans copyright to be rescraped without the need to pay anyone.
Maybe we can ask ChatGPT to search its databases for its own sources to its own answers. And even check the source is the original. Problem solved I guess... Feed the overbrain and it'll throw you some pennies while it's used to do you out of a job.
There is no need to invent new regulatory methods when the one we have is perfectly sufficient: if somebody can prove your AI model is tainted with unlicensed works, regardless of how you acquired them, then your model as a whole is an infringing work and the affected party can sue you for damages far exceeding the pennies of utility you gained from the action. Your stock price tanks, your corporate customers stop purchasing your models, and you go bankrupt.
It's exactly like the current regime of copyright where I could, in principle, copy paste a file from the Linux kernel and compile it into my binary application, and nobody would know. How much would a single file from a work with tens of thousands of contributors possibly be worth, right? Wrong, it takes a single disgruntled employee (which you are guaranteed to have when you exceed a headcount of roughly 5) to destroy your business and product. The only possible way to avoid this is to train on either public/open sources or get positive authorization for each and every file you slurp for the specific use of AI training, which you definitely won't get for pennies.
As for the inevitable dominance of our AI overbrains fed on open source information, I, for one, welcome them. The cat is out of the bag; it's not like we can return to the previous state of affairs. The problem, as always, becomes a political one: how to distribute the fruits of these new technical capabilities to the (human) citizens.
Isn't there a similar problem with Spotify? That's not been solved via political means. The artists are getting shafted. Hard to think the same won't be true for the unpaid feeders of ChatGPT.
I think in the particular case of music the market is saturated by an abundance of human generated content, not machine produced.
Music is one of those human endeavors where it is very desirable to succeed, to the point where many people are willing to do it for free for love of the craft. I can't see the immense profits Spotify allegedly makes by exploiting the artists.
Perhaps we need to accept that the artists are not so much getting shafted, but simply that the age of the superstar is over and most music will be free.
It seems silly to claim that we should all just stop publishing our thoughts simply because some process could come along and combine them with others in order to produce something unique. That is, of course, the story of all human history.
Suppose I write a unique and deeply technical textbook on a subject. Someone could buy that book, read it, become an expert on that subject, and then profit from that expertise. This is all well and good. One of the things that makes ChatGPT different is its ability to scale such that it can provide that expertise to a much larger number of people and at a much lower cost.
> One of the things that makes ChatGPT different is its ability to scale such that it can provide that expertise to a much larger number of people and at a much lower cost.
That, and the fact that it regularly spouts "facts" that are complete nonsense.
I will point out that human Hacker News commenters also commonly spout "facts" that are complete nonsense; in fact, it is us that trained it to do so.
Where is the limit with copyright law then? Is ChatGPT allowed to be trained on books?
If the "terms" of the book ban it, can I just read the book and then write notes which I upload to ChatGPT? (Would a Microsoft employee be paid to take classes/read books for the purpose?)
I don't think we know the answers here yet. Information wants to be free though and we're going to have to reconcile that moving forward!
but what motivation do people have to create something that will never be seen/read/listened to, and that on top of that will just make the people who take it richer while providing no credit back?
you are just working for free so other people can get rich off your work.
i sort of get your point, but in a way that's kind of what blogs/reddit posts, or writing samples published anywhere short of a paid publication, already are. it's just that GPT does it a lot more efficiently compared to your slow manual brain/process of scavenging the internet yourself for inspiration for your next novel or some creative thing.
There's a lot of sites out there that try to answer questions people are likely to search for specifically to monetise via ads. GPT probably kills them all if it becomes more popular than standard web search and then GPT slowly becomes outdated because people have no reason to publish the information GPT needs.
I've always wished I could have an ad free internet experience, I'd happily pay whatever revenue the ads would have generated.
Would it be terrible if GPT replaced ads with subscriptions? Of course they'd have to split the income with the sites they get the info from, much like how ads on a website make both the ad network and the site owner money.
My guess is, if you look at what a subscription would actually cost to cover what ads do right now, it would be a figure so large per person that most people wouldn't pay it.
Why are you assuming that no one will ever see it?
Will the chatbot that gives the answer also give a link to the sources it found the information at? If not, then no one will go to your page (via search results). I've only played with ChatGPT, and it never gives back sources for the information it finds.
Although someone in the comments above said Bing tries to include links back to sources where available, so that helps.
But how many of those websites depend on traffic? People still publish books because people still buy books.
I bet many people will try to get as much of their work into AI as possible, in order to amplify their impact.
what impact? you get no credit for what you do. The only people I would see trying to do that would be special interest groups/religions/cults that want their point of view to be reflected in chat results.
One example I could see is if you have a certain brand/product, you could try making as many forum posts and articles about it as possible in hopes that the AI picks it up as a solution to the user's problem.
Kinda like SEO but for AI. I hope this doesn't happen though, since SEO crap ruined google.
I was wondering when people would start publishing bad code on github along the lines of: this is a bubble sort algorithm, but the last line of the code deletes every file on your hard drive.
then when people are just blindly inserting code they end up running that and nuking their computer.
(or install a rootkit)
Dunno, things are just starting. Ads benefit the ad network and the site. I don't see any reason GPT couldn't do similar.
Your ideas get to spread around. You might want to do that if you care about ideas more than getting credited.
"you would just be working for free as a content creator for all of these companies"
-- This has been true ever since the FB/Twitter era took off
It's worse. At least there's some hope of getting people who enjoy the stuff you make to give you some money in return (despite the increasing tendency of corporate social sites to hide offsite links); if it's been copywashed by a corporate-owned scrape-and-remix machine, then there's no chance in hell of someone deciding to support your work.
on FB/Twitter you get paid in popularity (a thing some people strive for). with the chatgpt model you get nothing; no one knows you exist.
Also, on FB/Twitter/Insta you can build up a following that likes your work, which you can then market/sell to. Again, with ChatGPT you end up with nothing in the end.
I remember this already happening in Usenet times with IMDB, which got bought by Amazon. Or CDDB.
Bing Chat tries to provide citations. I often click on them.
perplexity.ai does provide them and I try to use them over phind or ChatGPT for that reason
Soon, the only "content" will be the SEO-optimization junk floating around, and it will create more content using GPT-4. It will be turtles all the way down.
All human knowledge is just turtles all the way down. We're just trying to create a turtle that shits turtles so we can relax.
ChatGPT can solve a lot of problems, but it cannot solve problems that you as a user don't know the questions to ask, which makes articles/blogs superior to it, and they will continue to be for a while still.
I have a project where I shared a lot of JavaScript info over the years and those articles are growing 10% month over month. Some get 500 daily views despite there being ChatGPT.
You just can’t trust it, neither can you get it to give you real context or the required visuals.
weird this got voted down, because this is one example where I think having a blog does work over ChatGPT: "problems that you as a user don't know the questions to ask". Same reason you would read a book: you need to know enough to ask questions.
Although I don't know how well ChatGPT++ can answer the question: "Give me an introduction to XYZ technology and include examples and questions"
Should we wait for it and ask "what questions should I ask that I don't know about"?
You can try, and once you waste your time for 10 minutes getting nowhere, go ahead and read the blog post that actually tells you what to do.
Ego is certainly something ChatGPT shares with humans.
But I don't know which blog post to read.
You should probably reread once more what I wrote, just saying.
Can you give concrete example to support your argument instead of avoiding conversation?
How is it different from Google search? You can find pretty much anything online, so what's the point of adding to it?
Just because someone could be financially motivated to post content doesn't mean that they have to.
I like to comment on Reddit and HN, and I don't expect to be paid for it (but if you would like to pay me then by all means).
Because you ultimately get credited. Something that the LLMs forget.
If I release all my writing under CC, then they should credit me if my writings are a part of the dataset it is trained on, as it is in some capacity a derivative thereof.
The difference would be that Google still links to your webpage, while ChatGPT&Co will just answer the question.
Same reason people still read books. It's less popular than it used to be, but still very relevant.
Also the same reason people still hang out in real life even though online game, chat services, and social media exist.
These are tools, and some people get too into the tools...but at the end of the day there's a time and a place for them.
I feel like this misses the thought I was having. It's more if google gets replaced with chat, and people just ask questions and get answers. The answers come from the content you created, but the people asking don't know that; they'll never visit your page, see what else you did; you don't really exist.
And I do think some formats don't lend themselves to just asking a chatbot, especially if you don't know enough to formulate the question to ask. Books are a good example, since you read them to learn things you don't know yet (in theory).
I'm drawing parallels to what you are saying. I'm saying that just because a technology is disruptive doesn't mean it has to completely obliterate the thing that came before it. The internet didn't kill books and social media didn't kill real life interaction.
Also, why is the assumption that AI can't cite sources? Isn't that what Bing already does?
Some may do it as a form of immortality, if these truly are 'foundational' models someone may want to feed as much of their identity and ways of thinking into them by way of flooding online discourse.
So all future models contain some morsel of their being. And the more original the person's ways of thinking, and expressing, the more influence they have on the model, as their thoughts are not as easily compressed or aligned with common embeddings.
I would guess many more people will move to youtube to create content and much of the web hollows out because there's no longer an incentive to write articles on many topics because they no longer get any traffic.
Given that google owns youtube I imagine they would be in an ideal position to extract and use the information in youtubes videos to power their chatbot in a way nobody else can.
It won't be long before these excellent image generators will give way to video generators and YouTube will be flooded with AI-spam.
Youtube has more signals.
You have upvote/downvote. All videos belong to channels which have sub counts. There's also comments.
In other words, it's a lot easier on youtube to tell if a video is likely crap or not and weigh it accordingly.
I would like to be paid for my content, but Google broke that early "social contract" of the internet when they started taking a bigger and bigger cut of the advertising pie. Social networks don't pay anything at all. Models will simply have to pay us if they want our data, and I think this is a more honest proposition.
Maybe eventually all content will be AI generated, with an objective based on "likes" or engagement.
I wrote a blog post yesterday about clean coding. Well, actually I asked ChatGPT to write it for me because I was lazy to start it. Then I wrote the following:
"Well, why would anyone write about anything anymore unless it’s something very specific and unknown. From ChatGPT I see that clean coding is a well established concept. I feel so because I agree with most of the things it says."
So I feel what you are saying. At the same time I think blog posts might evolve to hybrid things where you just talk to an AI and share your thoughts on its output.
You can read the post if you want: https://gurel.kaynak.link/2023/03/09/clean-coding/
I'm not so sure of a concrete answer to that. However, I think we may see that people keep publishing their non-AI content only when it is so novel (or contradictory to the mainstream) that AI information sources can't yet substitute for it. The era where tons of people publish tons of crap online is already in decline, but particularly valuable content may still find a place online.
In an abstract sense, I can imagine someone wanting self help advice may actually not want it from an AI, but a human instead because the domain of that advice would be highly dependent on individual experience or opinion that AI couldn't reliably provide. There would be too much risk in an AI providing purely specious advice that doesn't apply to reality.
For instance, a generic AI available to the public probably will not provide you non-mainstream dietary advice. An AI giving dating advice may ultimately default to reductive "boomer" advice and be unwilling to give controversial advice based on real world experience that may be superior.
Though it may not be forever, humans still have the advantage of individual initiative and experience in the physical world. If anything about your life is extraordinary, or if you're radical in any way (which describes a minority of the public), there may still be a place for that sort of content.
I won't claim that it won't happen, but to paint such prophecies you have to have quite a specific idea of the "internet", and I'm not even sure I can clearly imagine what it is for you.
The internet is a communication system. WhatsApp is the internet. Will messaging your mom become irrelevant because you can just ask ChatGPT? Well, maybe, but I don't see it happening in the near future. Okay, I see, you meant to say "world wide web", stuff you access in your browser, yeah… That doesn't really help either, because you access all sorts of stuff using your browser; it's just a lousy set of wrappers to render whatever there is, including WhatsApp.
So, okay, what do people do on the internet besides WhatsApp? They watch Twitch, for instance. Why? Are they looking for answers there? Surely not, generally it's such a mindbogglingly useless stupid waste of time it's hard to believe people actually watch this shit, yet they do, a lot, and even donate money like they are grateful their useless time is uselessly wasted. Also, it's a well-known fact they aren't even looking for a specific kind of content: if you are a streamer with 10K online you can do basically whatever you want, these people are following you, not whatever it is you did when they joined your channel. So, will people stop watching real people, because there are, well, rendered people? I'd say it's unlikely in the foreseeable future.
For the same reasons it's unlikely that people won't visit 4chan and HN anymore, all sorts of thematic forums and such. Obviously, they won't stop accessing online libraries, because when you want to read Kafka, you want to read Kafka and not a ChatGPT-generated summary of Kafka. Same with watching LoTR (even though it can be completely generated by NN, the movie has to have a name and you want to know that it's the same stuff your friend, "friend" or the favourite twitch-streamers of yours recommends, not some custom-generated movie, tailored specially for you). Same with every blogger, podcaster, youtuber. You may like recommendation systems, but it doesn't really diminish the role of trusted opinions for majority of people so far.
So, what else is there on the internet? Shitty information portals with copywriter-generated articles? Well, ok, now it will be ChatGPT-generated articles. So what? I suppose it may turn out to be actually better than human copywriters. Maybe Wikipedia will be less relevant (but it wouldn't, if it was better structured, and the main (even though it's false) claim of Wikipedia is that it doesn't generate original content anyway).
Surely a lot of things will look quite different 30 years forward. But it's hard to predict how exactly they will look, and I'm pretty sure it won't be whatever you imagine right now.
Does this website get its name from the insanity of disallowing copy/paste & context menus?
jQuery('body').bind('cut copy paste', function (e) { e.preventDefault(); });
jQuery("body").on("contextmenu",function(e){ return false; });
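A minimal sketch of what those two handlers do, and why removing the listener restores normal behaviour. This uses Node's built-in EventTarget as a hypothetical stand-in for document.body (the jQuery calls above are just addEventListener under the hood); the variable names are mine, not the site's:

```javascript
// Hypothetical stand-in: Node's EventTarget mimics document.body well enough
// to show how the site blocks events and how removing the blocker undoes it.
const body = new EventTarget();

let defaultPrevented = false;
// Equivalent of the site's jQuery handler: swallow the event.
const blocker = (e) => {
  e.preventDefault();
  defaultPrevented = e.defaultPrevented;
};

body.addEventListener("copy", blocker);
body.dispatchEvent(new Event("copy", { cancelable: true }));
console.log(defaultPrevented); // true: copy is suppressed

// Removing the listener (what deleting it by hand in DevTools achieves):
body.removeEventListener("copy", blocker);
defaultPrevented = false;
body.dispatchEvent(new Event("copy", { cancelable: true }));
console.log(defaultPrevented); // false: copy works again
```

Since the blocking is all client-side JavaScript, any way of detaching or neutralizing the listener (console, DevTools, or disabling the events in browser prefs) defeats it.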
Is it possible to block this behavior at the browser level? Other than an addon: https://addons.mozilla.org/en-US/firefox/addon/re-enable-rig...
In Firefox you can set dom.event.contextmenu.enabled and dom.event.clipboardevents.enabled to false in about:config which will natively prevent most of this type of annoyance.
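For what it's worth, the same prefs can also be set persistently via a user.js file in the Firefox profile directory (a sketch; the pref names are taken from the comment above and may vary between Firefox versions):

// user.js sketch: persist the about:config changes across profiles/installs
user_pref("dom.event.contextmenu.enabled", false);
user_pref("dom.event.clipboardevents.enabled", false);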
Does this also disable the handy "click to copy to clipboard" buttons that a lot of sites feature?
I set both of these to false, and I was able to copy a command from a README file on GitHub still (after reopening tab)
In Chrome(and the likes):
- open developer tools (CTRL-SHIFT-I, or F12)
- make sure you are on Elements tab
- click on '<body class=...' tag line
- then in the right hand panel switch to Event Listeners tab
- remove anything you like (contextmenu, cut, copy, paste in this case)
I'm sure there is a similar way in Firefox...
I think Chrome has shift+right-click, or double right-click, to open the browser context menu anyway. I currently don't have access to a PC to double-check.
Wow that is really pathetic. There are easy ways around it but the implication is quite ironic: talking about chat bots taking all the credit for everything and then not letting anyone quote them.
That'll surely stop those fancy GPTs from collecting our content.
Pity the competition. Has anyone tried these AI-powered searches? https://www.perplexity.ai/ https://you.com
I'm finding that whichever search is used, it's much the same results with the same political leanings, so in fact AI search doesn't offer anything new in that respect. I'm a sceptic.
I hope they update GitHub Copilot to GPT-4, too.
Apologies if this is a little off topic, but I’ve been really excited to try OpenAI’s GPT API and have been locked out for months with no response from their support chat. Any insight into how to get access would be really highly appreciated!
Here’s what happened: I made an account to play around with ChatGPT, then wanted to switch to my company email address to use their API on that account. They wouldn’t let me use my phone number to sign up for a second account, so I deleted the first one. Unfortunately, deleting that account didn’t free it so I could sign up again with a new number. I then went and bought a new phone number from Google Voice out of desperation since their support never replied, but they don’t allow VoIP numbers, so that was in vain. My initial support request was in early January, and both that one and my follow-up a couple weeks ago have gone unseen. So it feels as though I’m hard locked out from an API that looks like a lot of fun to use for both personal and professional projects. What is one to do?
Talkatone provides a VOIP service that passes most 2FA checks (including ones that try to filter for VOIP)
I hesitate to talk about it too much lest they get abused to high hell and eventually filtered like everyone else, but at least a few months ago I was able to register an OpenAI account using one of their numbers
>What is one to do?
Buy a burner SIM then change the 2FA to Google Auth?
Oof. It’s a creative solution. But it’s a dangerous strategy for something that I might be using professionally. Getting locked out would be a huge pain then.
I’ll give it a go regardless! Thank you for the idea
People lose access to old phone numbers all the time. I'm sure there's a system in place for this, especially if you have an established account.
I'm super excited for ChatGPT-4. I remember playing around with gpt-2, and distinctly being underwhelmed. GPT-3 was the first time I felt truly excited about generative text AI.
I can't wait to see how GPT-4 is!
How do they keep churning these out this fast? Feels like this kind of technology should take longer to develop, if only through the baby-with-nine-mums-in-one-month adage.
Funding and public interest.
LLMs have been around for a while and they aren't really that different than they were a few years ago tech-wise. The question was always about being able to get good data and compute power for training/running them.
Now that people understand the capabilities of the tech, it's got potential for profit and there's incentive to throw money at it.
OpenAI is treating GPT as a "foundational model". They spend time training the foundation, then build on top of that. GPT-3 was published in May 2020. GPT-3.5 ("text-davinci-003" and "code-davinci-002") shipped a year ago, and ChatGPT was just fine-tuned on top of those.
So they've had plenty of time to increase the training set, improve the architecture and run GPUs full power to get a GPT-4.
GPT-3 came out almost 3 years ago. If anything this has been too slow compared to previous ones.
So... they're releasing two separate AIs? I don't understand how a text LLM is supposed to generate video.
Are we in the singularity now ?
My thought exactly. GPT-4 will be asked to create GPT-5, which will create special hardware as a prerequisite, etc...