Ask HN: Does ChatGPT scare you? It scares me

20 points by rcshubhadeep 3 years ago · 37 comments


Today, with a little effort, I could generate fully working code for RNN-based time series forecasting using this tool (a rough sketch of the kind of code I mean is at the end of this post). It scares me for a few reasons:

1.> What will happen when evil players get hold of this tech? Are we going to witness another cold war or something, except this time the threat will be who develops better models?

2.> These incredibly powerful models are built and maintained by private orgs. Isn't that scary in itself?

3.> How do we adapt ourselves so that we can co-exist with such technologies (and the many more yet to come in the near future)?

Maybe some more points can be added. What are your thoughts?
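
Here is a rough sketch of the kind of code I mean, purely as an illustration: a small PyTorch RNN that forecasts the next point of a noisy sine wave from a window of past points. PyTorch, the toy data, and the model sizes are my own assumptions, not the exact code the tool produced.

    # Toy setup: each training example is a window of the last 20 points,
    # and the target is the point that follows it.
    import torch
    import torch.nn as nn

    t = torch.linspace(0, 40, 1200)
    series = torch.sin(t) + 0.1 * torch.randn_like(t)   # noisy sine wave
    window = 20
    X = torch.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]

    class Forecaster(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.rnn = nn.RNN(input_size=1, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                            # x: (batch, window)
            out, _ = self.rnn(x.unsqueeze(-1))           # (batch, window, hidden)
            return self.head(out[:, -1, :]).squeeze(-1)  # next-step prediction

    model = Forecaster()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(200):        # full-batch training is fine for a toy example
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()

    print("final training MSE:", loss.item())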

kossTKR 3 years ago

I just went from "not really" to "most coding jobs could be gone in about 5 years" after just trying it out.

It first made a basic admin interface in HTML+CSS that I could copy-paste into CodePen, then made it interactive with pure JS, then refactored it to Vue 3, then refactored it into the now obscure Angular 1.4 just to test it, then back to Vue 3, then added Pinia for persistent state in the browser, then converted the application to PHP with Laravel+Livewire, and then to Python with Django+HTMX for the backend.

EDIT: Wow, then I had it make a graph with Canvas showing a sine wave whose speed I could control via an input, with the prompt "make an application in JS and HTML that shows a sine curve that you can speed up or down via an input".

Absolutely mind-boggling! It means it can already create a simple working application and transform it between most known stacks, even older ones.

I have no idea why some people here on HN say that it doesn't understand logic when it can refactor like that?

This is honestly making me reconsider being a developer as a career choice just a little.

  • willio58 3 years ago

    Most coding jobs will not be gone in 5 years. Remember when "automation" was going to take all the truckers' jobs in 5 years? That was about 5 years ago, and I'm not seeing many automated trucks on the roads. Of course code is different and some things are being automated; Copilot and similar projects are amazing, and the next iterations will likely be mind-blowing. That being said, there's a looooong tail in programming where ML-aided code just won't be the final solution for a long time. (I'm guessing more like 20-30 years from now.)

    I also think there will be a long period where a programmer's job turns instead into writing extremely detailed, comment-like code where you define nearly every requirement (down to "this button should be padded 5px from the top right and hover states should function as such...").

    Until we reach the day where a designer could feed a design into an ML alg and out pops the backend, frontend, and infra to do the job and scale perfectly without bugs.. devs will be needed.

    This will follow Gates' law, which says we overestimate the impact of technology in the short-term and underestimate the effect in the long run.

    • alach11 3 years ago

      Yep. We're going to see an explosion of development due to Jevons Paradox. As writing custom code becomes cheaper, more and more niches will open up for custom development work. Every small business will be able to afford to hire someone to make a custom application to suit their exact needs.

      https://en.wikipedia.org/wiki/Jevons_paradox

      • readonthegoapp 3 years ago

        more of a Jevons question, but do you also agree that the increasing use of 'renewable' energy like solar/wind will bring about an explosion in total energy demand/use?

        Then an increase in fossil fuel use due to decreasing cost?

        • alach11 3 years ago

          Interesting question. I think reducing electricity cost (with technology improvements in solar/wind) will definitely increase total energy demand. But that will translate only in a limited capacity to decreased fossil fuel cost. Fossil fuel costs have a ceiling based on how much it costs to get them out of the ground. Prices won't be able to fall (in the long-term) below the marginal cost of production.

    • Madmallard 3 years ago

      Yeah this is way different than "automation". I see the majority of simplified positions being consolidated as a result of this technology.

  • solumunus 3 years ago

    > I have no idea why some people here on HN say that it doesn't understand logic when it can refactor like that?

    Because it literally is not interpreting logic or using logical reasoning; that's not a matter of opinion. The people who made it wouldn't claim that, because that's not what it has been programmed to do.

    It's an incredible example of machine learning, but all it's essentially doing is parsing StackOverflow answers. Everything you just said can be done by novices reading StackOverflow and copying and pasting things together. Yes, it makes that process much quicker, but there's a reason the invention of StackOverflow didn't displace programming jobs:

    If you want to create complex production software and grow and maintain it long term, it's simply not enough to copy and paste from the internet.

    If that's the extent of your ability, then your software will turn into complete crap, riddled with technical debt, and will be almost impossible to develop or maintain. You will be far from the efficiency required to build a real business from that software.

    If you think your job can be replaced by someone with virtually no programming experience copying and pasting things together from the internet, then yes, you should be somewhat worried: get better ASAP. For me personally, that is not a threat to my job. ChatGPT is genuinely impressive and I expect even better things to be released in the coming years, but even so, we're a long, long way from building complex software without the need for programmers to be involved.

    The people who are extremely worried about this are probably the same people who believed Elon Musk when he said we would have fully self driving cars 2 years ago.

  • riku_iki 3 years ago

    > I have no idea why some people here on HN say that it doesn't understand logic when it can refactor like that?

    could you try adding a logical/calculation bug to the code and asking it to fix it?

  • readonthegoapp 3 years ago

    would love to see this as a demo video.

    i'm learning kotlin at the moment, then getting up to speed on android dev so i can put together a simple app i don't want to pay someone else to build for me. and i'm bored to death already. if i could have chatgpt do it for me, cool.

    i don't understand how it works - i've finally signed up, logged in, tried the 'summarize.site' chrome extension for summarizing articles, and a couple of test questions on the chatgpt site, but can it build me a site and deploy it for me? or is there a ton of setup?

    guess i will find out.

  • l3uwin 3 years ago

    Coding will just change, it won't go away. It will need to be directed and taught just like any other junior dev who knows the theory but has no sense of value-add or larger context.

    We will always have to supervise the machines. Otherwise we wouldn't still be putting Prometheus on 'stable' systems.

tsukikage 3 years ago

> Today, with a little effort, I could generate fully working code for RNN-based time series forecasting using this tool.

Interesting. IME, even for toy problems it generally spits out code that fails to do what I requested for many inputs, or fails entirely. Using this tool to solve a problem requires not only that I understand what it spits out but also that I understand how to actually solve the problem, so that I can work out which parts of the rubbish it spits out are broken and ask it to iterate on those.

It doesn't seem frightening or particularly transformative; I'm not even convinced that using it saves more time than it costs. It's not doing anything radically different from https://github.com/drathier/stack-overflow-import, and the latter works better.

The worst part is that just as with tools like Grammarly, the people who would most benefit from a properly working version of such a tool are exactly the people least well placed to understand when and how the output of the tool is wrong.

I welcome evil players attempting to use it: their evil plans will self-destruct in hilarious ways.

seydor 3 years ago

Maybe it can also answer the question "Why do we have to be such doomers"? Can't we appreciate what an incredible opportunity these systems can be?

1) We are forever in a 'cold war'; nuclear weapons have been able to destroy the Earth multiple times over for 70 years. Not sure how a less potent weapon supersedes this fact.

2) Those models are successful with computer code because programmers have been very pro-open-source and pro-open-data since forever. We'll have open source versions of these things soon -- we'll still need to find the hardware, but I think this can be done too. It's much easier to do than a nuclear bomb, so I expect these systems to become ubiquitous.

I wish biology and medicine had had a similar attitude to open data and open science. Imagine if you could run similar queries against genome databases or neuroscience imaging.

3) First we get excited, then some people will turn that fun tech into billions, as before.

I really don't remember people being such doomers when the internet came about. What happened?

  • sdwr 3 years ago

    Thanks, needed that bit of optimism.

    On the code-completion side, I'm already feeling the pain of having to interact linearly with Copilot. It would be nice to have a few different panes - "written description", "suggested changes", and "autocomplete" - and have them update each other. Just having autocomplete is like peeping through a keyhole.

colingmathews 3 years ago

One big thing I've learned in my tech career is to embrace new tech instead of fearing it. It's kind of like how many people have the experience that their parents don't like their music. A lot of the time it's just resistance to a changing world.

Don't get me wrong, I do think there are dangers to runaway progress. But we can't stop the internet now. I think our best bet is opening ourselves up to what comes and helping skew attention towards tech that leads to more compassion.

balaji1 3 years ago

> What will happen when evil players get hold of this tech? Are we going to witness another cold war or something, except this time the threat will be who develops better models?

My friend wrote a "remote c2c server, basically a mutating malware" using ChatGPT. He had no bad intentions; he just works in the security domain.

People with malicious intentions will be the biggest and earliest adopters of AI. Somehow that is my first thought.

lmarcos 3 years ago

Am I the only one who thinks that tech like ChatGPT and similar tools will only increase the demand for software engineers?

In the past, software engineers dealt with punched cards (few engineers), later with assembler (more demand), then with low-level programming languages like C (demand starts to increase), and nowadays engineers deal with high-level languages like Python, JS, etc. (high demand). As technology makes software more ubiquitous and reachable in every aspect of life, the demand for people who know about software increases.

Maybe in the future software engineers will have to deal with even higher-level languages (prompts?), but that would only mean that making software is easier than before, and you'll see software become even more ubiquitous than it is right now. Demand for people who know about software will go up even if the tooling seems to require fewer people with that knowledge.

jstx1 3 years ago

> Today, with a little effort, I could generate fully working code for RNN-based time series forecasting using this tool. It scares me for a few reasons

Even before large language models, we had the option to copy-paste working code for RNN-based time series forecasting for years.

  • 2devnull 3 years ago

    For different definitions of “we” that seems to be true. “We” is a sneaky word. It means one thing one day, and something very different the next.

    • jstx1 3 years ago

      Not sure what you mean by this, "we" here means anyone who could use Google and open a Github link.

hnthrowaway0328 3 years ago

I think that, considering the computing power major players have, they should be well into the game already. I won't be surprised if they already have something more sophisticated.

solardev 3 years ago

I'm scared politicians will get scared and ban it in its infancy. AI gives me hope for the future, picking up where humans failed. It's humans who scare me way more.

  • serf 3 years ago

    can you imagine a scenario where an AI ban would ever work? all it would do is tie together the shoelaces of whatever country enacts it while the rest of the world runs laps.

    • solardev 3 years ago

      Like stem cells? The religious folks probably don't like the idea of a robot god as much as I do...

sdfgdfghj 3 years ago

If you watch any sort of financial YouTube videos, you've probably noticed the number of fake bot conversations in the comments section, usually trying to promote scam services.

It scares me that in a few years I won't be able to tell whether something is fake most of the time, and the global mindset will be shaped by even more fake content.

We may need to figure out a way to identify genuine human-generated content.

gardenhedge 3 years ago

I'm scared they'll start charging for it and I will lose access to it

  • throwaway675309 3 years ago

    The only reason that it's free right now is that they're using this as an excuse to gather data.

    It's run by OpenAI, and like its current GPT models such as Davinci, it absolutely will be monetized.

ActorNightly 3 years ago

It's just more evidence of what I already came to believe once I understood the concept of ML about 6 years ago.

1. There is a good subset of jobs across multiple industries that are simply "decision tree lookup" operations. These types of jobs will most certainly be replaced. For example, I worked for an aerospace company where we hired a consultant to advise on a manufacturing process. He basically looked at what we were trying to make and advised on the tooling, process, etc. This is the type of job that can easily be done by a future version of ChatGPT that is sufficiently trained on both textual and mathematical contexts. Software jobs often fall into the above category, replicating common patterns that developers have learned. ChatGPT right now is even smart enough to take an input JSON and an output JSON and write code to transform one into the other (see the sketch after this list).

2. The actual "compute" operations jobs (like making software that requires figuring out a new pattern of transforming data, or interfacing with a new piece of hardware like a 3D display) won't be replaced, but the skill set will shift to be much more computer-science-centric: being able to either a) additively train generic models on specific tasks, or b) use state-of-the-art AI-assisted tools effectively.

3. Overall, quality of life is going to improve, as it will get a lot cheaper to do things.
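
A hedged sketch of the kind of input-JSON-to-output-JSON transform mentioned in point 1; the record shape and field names here are hypothetical, purely for illustration:

    import json

    # Hypothetical reshaping: a flat input record becomes a nested output record --
    # the sort of boilerplate transform that can be generated from two JSON samples.
    def transform(record: dict) -> dict:
        return {
            "id": record["user_id"],
            "name": {"first": record["first_name"], "last": record["last_name"]},
            "active": record.get("status") == "active",
        }

    src = '{"user_id": 7, "first_name": "Ada", "last_name": "Lovelace", "status": "active"}'
    print(json.dumps(transform(json.loads(src)), indent=2))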

TL;DR: if you are a software dev and you haven't already, get super familiar with ML concepts, PyTorch, etc.

https://github.com/karpathy/micrograd is a very good primer to start with once you understand the basic concepts.
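
If you want a quick, concrete taste of what that primer covers, here is a minimal sketch using micrograd's Value class (install with pip; the particular expression is just an illustration):

    from micrograd.engine import Value

    a = Value(2.0)
    b = Value(-3.0)
    c = a * b + a ** 2   # build a tiny expression graph: c = a*b + a^2
    c.backward()         # reverse-mode autodiff through the graph

    print(c.data)        # -2.0
    print(a.grad)        # dc/da = b + 2a = 1.0
    print(b.grad)        # dc/db = a = 2.0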

johlits 3 years ago

It doesn't deal with people problems (yet) so we'll keep our jobs for now.

cleerline 3 years ago

gave it the below prompt and it nailed the bulk of it.

from the following table create a php crud application:

    CREATE TABLE `rating` (
      `rating_id` int(11) NOT NULL,
      `review` varchar(4048) DEFAULT NULL,
      `rating` tinyint(4) NOT NULL
    )

evilbob93 3 years ago

So far I am skeptical. It can do some things, but it is not beyond bullshitting.

xchip 3 years ago

People wondered the same right after fire was discovered.

  • 1attice 3 years ago

    'People wondered the same after fire was discovered' is also what people said after nuclear fission was discovered. And it was as misleading a comparison then as it is today.

    It's worse than that, though. We didn't have nuclear fission being made by multiple competing parties in the private sector with no possibility of government regulation or oversight. That's what we've got today for ML.

    Decentralization sounds great until someone is cooking with plutonium. Our values of freely sharing information and technology are simply unequipped for technologies this dangerous, but we have not yet recognized ML as a weapon of mass destruction. We should, as it plainly can now be used thusly.

    I predict that in the future, open source ML models will be regarded the same way that loose nukes are treated today.

    Of course, the problem will be far worse. We never set up multiple competing MOOCs to teach nuke-making at scale to the next generation. In effect, we have made ML terrorism unavoidable and endemic after 2025 or so.

    Given that democracy was already hanging by a thread, I'm going to wager that the profusion of next-generation ML bots will make it effectively impossible at scale, as it becomes a simple matter to create 10,000 supporters, say, with unique faces, voices, and opinions, out of the thinnest of air.

    This means that democracy will denature into a sort of suspicious populism. And before you say, "isn't that where we're already at," I'll say, "yes and it just got a whole heck of a lot worse."

  • zardo 3 years ago

    Fire has been used by people to do some pretty awful things.
