Tell HN: I stopped caring about personal development in tech after seeing GPT-4

72 points by dumbaccount123 3 years ago · 176 comments


As a C++/networking enthusiast who religiously follows every new feature and quirk in the language, as well as several RFCs, I have completely lost motivation for self-improvement in terms of keeping my skills sharp, due to the rapid AI takeover. I had an opportunity to see GPT-4 in action on an in-house product and I'm taken aback: it architects, generates tickets, and starts writing code using feature/bug/spike branches for an embedded device a company is working on.

It can do almost everything I can do, a bit better, and I have years and years of domain knowledge, keep on top of RFC changes, new languages, C++ standards, etc., do side projects, and even the occasional Leetcode.

Oh well, this gold rush ran on long enough. I'm glad I made a bit of money from the industry, but I think all these students going into CS are in for a rude awakening, and we're in for a huge shift in this industry.

mikewarot 3 years ago

I have the opposite take... there's a program I use (and it ties me to Windows) called WikidPad[1], a personal wiki written in Python. The win32 version works great, and because it was compiled (back in 2018), the binary should keep working forever. The Linux distribution is source-based, and broke when wxWidgets changed the name/nature of some key parameters to its calls.

I'm a Pascal programmer, not a Python programmer... but I'm hoping I can leverage Copilot to help me navigate the nitty-gritty boilerplate that would otherwise take days to sort through, and get to the heart of the refactoring/patching necessary to bring WikidPad up to date and fix the breakage.

I see GPT-4 and kin as tools that allow more freedom of action and less worry about the stuff I always hated anyway: the minutiae of coding.

--

>years of domain knowledge

Usually the term "domain knowledge" applies to real-world, non-programming knowledge such as chemistry, manufacturing, etc. This is the first time I've seen it applied to programming. Programming is just a means to an end; I've never considered it an industry. We produce a product with zero marginal cost.

I suspect you are in the same emotional place that accountants were, the first time they saw spreadsheets in use. It must have seemed like the end of the world to them, but it wasn't.

>all these students going into CS are in for a rude awakening

As long as they know that computers are a tool, not the end result, they'll be fine.

[1] https://wikidpad.sourceforge.net/

  • creshal 3 years ago

    > This is the first time I've seen it applied to programming. Programming is just a means to an end. I've never considered programming to be an industry.

    C++ versus Pascal mindset. :)

    > I suspect you are in the same emotional place that accountants were, the first time they saw spreadsheets in use. It must have seemed like the end of the world to them, but it wasn't.

    Yes and no. Spreadsheets did not replace the good accountants who actually orchestrated the whole department, but they got rid of a lot of the low-level grunt work.

    AI-supported coding autopilots seem to go in the same direction: The junior devs whose whole job it is to translate architectural and design specifications into excessively verbose boilerplate code will struggle to survive, but the software architects above them will find a new means through which to express the analytical thinking, process planning and understanding of complex dependencies that they're paid for.

    • LapsangGuzzler 3 years ago

      > The junior devs whose whole job it is to translate architectural and design specifications into excessively verbose boilerplate code will struggle to survive, but the software architects above them will find a new means through which to express the analytical thinking, process planning and understanding of complex dependencies that they're paid for.

      Which will, in turn, further drive shortages of good developer talent long term. The industry thinks there's a tech labor shortage now? Just wait until the ladder has been completely kicked over for young folks trying to find their career footing in tech.

      We were all juniors once, doing work that could largely or completely be replaced by AI. We had to do that grunt work to learn the lessons that make us architects today.

      • hellisothers 3 years ago

        The entrenched elite will have finally come for software engineering, you’ll only get to do that cool high level work if you go to the right school, with the right degree, with the right certification. And before anybody says “it’s already like that”, it’s really not.

        • the_only_law 3 years ago

          > The entrenched elite will have finally come for software engineering, you’ll only get to do that cool high level work if you go to the right school, with the right degree, with the right certification.

          I figured this was going to happen long before ChatGPT, but sadly I failed to find a good exit, because, well… almost everything else that doesn't suck is already like that, and the golden handcuffs were too tight.

        • red-iron-pine 3 years ago

          No, it really is. Hang out at a Stanford or MIT career fair.

          The number of non-STEM folks who make it into high-level tech, though certainly greater than zero, is still trivial in comparison to those who go that way early.

    • soco 3 years ago

      I wonder how the future architect can get there, if going through the junior stages is suddenly no longer an option. Because those all-understanding software architects don't just pop into existence.

      • mikewarot 3 years ago

        There are new accountants being minted all the time, spreadsheets didn't stop that.

        Artists didn't stop because the camera made it possible to do what otherwise took hours with a canvas and oils.

        The nature of programming may change, but it's still about using computers to solve problems.

        --- Putting on the old guy hat ---

        You kids didn't learn programming from magazine articles typing in your favorite little game into a computer that went away when you powered it off.

        You didn't have to toggle in a boot loader before you could use the computer.

        You didn't dream of one day having a modem and being able to call BBSs, then dream of your own phone line that you didn't have to share.

        Yet y'all turned out OK, despite all those changes.

        It'll be OK, kid... it'll be OK.

        Computer programs will always be complex beasties with bugs hiding in the corners. There will always be a class of people willing to find those bugs and make things easier to use. You are that type of person, right? Good!

        • LapsangGuzzler 3 years ago

          > Yet, ya'll turned out ok, despite all those changes.

          I personally do very well for myself, being in the top 1% of earners in my generation. It deeply concerns me that I can barely afford to buy a house despite this fact. Most of my peers can't even dream of doing something like buying a house right now. And why is this happening? Because computers are being used to extract vast amounts of value from the system to be captured by a tiny fraction of people who are increasingly owning everything. AI will only accelerate this trend.

          And perhaps I'm one of the lucky ones to have carved out a good career for myself, but everywhere I look, I see folks struggling more and more. And stuff like AI is about to make life a whole lot harder for them. Yuval Harari has talked about the rise of the "useless class" [0] and the challenge of mass unemployment in a future where AI has replaced many jobs.

          You can repeat the "it'll be OK" mantra as much as you want; that doesn't make it true.

          0: https://www.youtube.com/watch?v=94o-9zR2bew

        • cudgy 3 years ago

          > You kids didn't learn programming from magazine articles typing in your favorite little game into a computer that went away when you powered it off. You didn't have to toggle in a boot loader before you could use the computer. You didn't dream of one day having a modem and being able to call BBSs, then dream of your own phone line that you didn't have to share. Yet y'all turned out ok, despite all those changes.

          This was at the beginning of the software revolution. Growth of the industry was huge and fast. Entrants into the field were virtually guaranteed a decent career with little competition and almost no automation.

          Those days are over. We are dealing with a relatively mature industry now, with a new AI tool whose growth is limited by hardware processing power rather than by the supply of human beings. Low-level/beginner programming skills will no longer be needed, and experienced programmers will evolve into roles where they drive the AI tools at a high level to create solutions, occasionally debug tricky issues the AI created, and implement code the AI is unable to generate properly. That last task is the one that will determine the remaining role of the human developer.

          • soco 3 years ago

            I'm not sure how the current (or last year's) trend of "don't go to useless college, just take this bootcamp" will fare in the face of the AI revolution, because I have the impression those bootcamps only teach the technology du jour, and none of them teach the high-level solutions you are hinting at. So yes, I also think there will be a need for computer people, just far fewer "programmers".

            • cudgy 3 years ago

              Yes, bootcamps are somewhat effective for simpler, structured, higher-level software development like web development frameworks, e-commerce integrations like Shopify, and IT support, which is what most of them target. However, they are not as effective for other forms of software development, like lower-level server-side work, devices, operating systems, compilers, DB administration, and parallel processing, or for advanced skills like requirements gathering, project management, and architecture.

              My guess is AIs will largely replace the benefits of the rote memorization learned in these bootcamps and make these skills less marketable to companies.

          • creshal 3 years ago

            We've already seen much of this development pre-AI with the rise of no-code/low-code platforms. Even if you manage to put this particular genie back in the bottle, there's no escaping the increasing automation of programming.

            And so far it has only increased the demand for software, because for anyone outside software development, cheaper software just means your budget buys more, faster.

            • cudgy 3 years ago

              No-code/low-code may increase demand for the software frameworks that empower domain experts, but that does not mean it increases demand for professional software developers.

              • creshal 3 years ago

                Most of the successful deployments I've seen involved a lot of programmers regardless; either to advise the domain experts, or to customize the solution to the specific needs of the organization, be it with custom components, or interfaces to other systems. There's a lot of untapped demand for software solutions that'll keep programmers busy as total project costs fall due to increased automation.

      • creshal 3 years ago

        Maybe we'll finally invest in a real education and training pipeline, like other professions have, instead of making people figure out everything on the job.

        • clbrmbr 3 years ago

          It's hard to figure out how to innovate on education (institutions are slow to change) in the face of such rapid change.

          I actually feel my university education prepared me well for what's going on. I studied a mix of theoretical computer science, physics, philosophy, and mathematics (especially calculus and number theory). Very little of what I learned has been obsoleted, and probability theory and NLP in particular are turning out to be very valuable to know when working with even Copilot.

          But man, I feel bad for those studying “programming” as a trade rather than computer science as a scientific / engineering discipline.

          • cudgy 3 years ago

            Your background has no domain knowledge, though. It is structured, foundational STEM skills (philosophy excepted), which the AIs can more easily handle.

            Domain knowledge about specific industries, and about what the software needs to do, seems the more valuable skillset going forward. How do physics and computer science help with that? Communication skills and the ability to learn new, more unstructured domains are likely to be more critical, like those taught by strong liberal arts disciplines, for example.

      • LapsangGuzzler 3 years ago

        They can't. Period. It's the next level of prioritizing the short term at the expense of the long term.

    • ilaksh 3 years ago

      Incredibly arrogant, disrespectful, and false. Junior devs who actually write code are beneath you and don't use "analytical thinking" or have to understand processes or dependencies? Ridiculous.

      GPT-4 can do all of that stuff, probably as well as or better than you. If you think you're still better, can you do it for literally 24 hours a day? And 3-6 months from now, will you have become 50-500% faster/better at your job?

      I would love to see a few samples of some real-world "architecting" you have done in the last few months, along with the calendar schedule you did it on, and then compare you head-to-head with GPT-4 and a junior dev. And then we show your executives how that went. Sound good?

      • creshal 3 years ago

        > Junior devs who actually write code are beneath you and don't use "analytical thinking" or have to understand processes or dependencies?

        They're hardly given a chance to do so when all they're getting is tickets to turn design X or API spec Y into code, with no allowance given to make it better. Yes, that's not the workflow in all orgs, but there are a lot of organisations where most people aren't expected, or allowed, to be creative until they're far up the food chain. And those will be the first to lay off what they consider meatbag machines. Think Oracle, Accenture, IBM, etc.

        With regards to your personal attacks, have a nicer day than you seem to have had until now.

        • ilaksh 3 years ago

          They were not personal attacks. Just responding to your arrogance and disrespect towards junior developers.

taylodl 3 years ago

Meh. I thought the same thing when I saw Visual C++ in 1992. It was amazing - it could produce an entire Windows application - a Windows application! - with a few mouse clicks. Add in a little bit of logic and presto! You had a complete Windows application, ready to ship. Many of us thought it was all over for developers. The gold rush was over, we were all going to be out of work in a few short years.

How well did that play out?

It didn't. At all. In fact, quite the opposite happened. But I would be lying if I said development wasn't transformed. A lot of menial labor, labor many here on HN have never dealt with, was no longer done.

I see the same thing happening with AI. What you need to be thinking about is: what does an "AI app" look like? What does an app accelerated with AI look like? How are we going to use this technology to better serve our customers' needs? How will applications be integrated with AI? What even is an application in this new world?

Paradigm shifts are exciting times! Enjoy it!

  • tpoacher 3 years ago

    idk, I take the point, but this also screams of normalcy bias to me.

    Just because people didn't become obsolete then doesn't mean that will always be the case,

    especially when obsolescence through technology more generally isn't even a rare thing.

eloff 3 years ago

History is full of inventions that changed the game for the people in a specific industry. Sometimes they allowed one person managing the machines to replace a team previously doing manual labor. I don’t think that’s the right metaphor here, because programming is not manual labor. The parts of programming that are like manual labor will disappear, you’ll instruct the AI to do it for you. But you need to know what to prompt it with and how to verify its output and plumb it into your code. Speed of writing code is rarely the bottleneck, so speeding that part up doesn’t double your productivity. Still, it helps.

I've been programming with the assistance of Copilot for a year now, and using ChatGPT as well since 3.5. These tools are amazing; I'm never going back. But to me they've only increased the value of the senior engineer with deep domain knowledge. They make me more productive, but I have to come up with the requirements, correct the AI's code, and do all the plumbing it can't do efficiently. A lot of the code it comes up with is pure garbage; I have to know enough to tell the difference. It does kind of replace a junior developer a little, if you were giving them grunt-work tasks, which could be an issue for people starting out in this field.

There is so much more that software engineers do than just writing code. That’s really a minority of the time each day for me. If you’re familiar with Amdahl’s law, then you know there’s a mathematical limit to the productivity improvement here.
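
To put a number on that (the 30% figure below is purely illustrative): if a fraction p of the job is sped up by a factor s, Amdahl's law bounds the overall speedup at 1/((1-p) + p/s).

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Overall speedup when a fraction p of the work is accelerated by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# If writing code is, say, 30% of the job and an AI made that part effectively
# instantaneous, the whole job still only gets ~1.43x faster.
print(round(amdahl_speedup(0.30, 1e9), 2))  # 1.43
```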

  • ilaksh 3 years ago

    Have you tried GPT4?

    • spion 3 years ago

      I have. I'm impressed by how incredibly helpful it is, but it's clear it's far from being a replacement yet.

      One example is its ability to do problem-solving when diagnosing a complex issue:

      1. It repeatedly ignores described constraints and offers suggestions that logically cannot be solutions.

      2. When I give it the explanation, it apologizes and... what do you know, two messages later it's back to the same suggestion.

      3. When I sigh and try the suggestion anyway, then go back and tell it that it didn't work, it suggests something else, and when that doesn't work either, it suggests the first thing again!

      I'm not saying it's never going to happen... but it's really not there yet.

    • eloff 3 years ago

      Yes, I’ve been using it at work and on my side project. It’s better, but it’s still just a tool that improves my productivity. It’s not going to replace me.

    • nikau 3 years ago

      I'm holding out for the GPT5 Max Pro.

aristofun 3 years ago

In essence, this can only mean two things (either of them, or both):

1. You're not far from beginner level in your C++/networking enthusiasm.
2. The types of products/codebases you deal with were overengineered, outdated, dumb, or poorly designed in the first place.

Unfortunately there are still piles and piles of garbage code and so-called "architecture" being produced every day. That is low-hanging fruit for AI, and rightfully so.

nprateem 3 years ago

I wouldn't worry. Its code output is 50% garbage; I'm not in fear for my job. All the hypesters assuming this will destroy lawyers, accountants, doctors, etc. clearly don't understand that correctness isn't something you can just tack on to a prediction machine.

  • astronads 3 years ago

    This mindset just artificially downplays the fact that it will get incrementally better, multiple times, and strikingly quickly. It's insanely shortsighted, IMO. You don't have to believe AI is the next coming of Jesus to recognize it has reached a point (or soon will) of being genuinely disruptive. Corporate execs will absolutely and quickly pursue AI to reduce costs and speed up development, at the expense of employees.

    • nprateem 3 years ago

      See my other comment. I think logical rigour requires another breakthrough.

  • glimshe 3 years ago

    The next version will be 40% garbage. Then 30%, then 20%... until we get to the point where it has no more garbage than human code.

    • nprateem 3 years ago

      That's what you might expect, but it requires another breakthrough, IMO. From what I understand of LLMs, they can't be incrementally improved to add logical reasoning, since they're just guessing the next word. That's impressive, and fine in many cases, but there are many where it's not enough.

      • HDThoreaun 3 years ago

        They already have some form of reasoning, though. GPT-4 can solve novel problems. It's certainly not incredible at it, but the incremental logical-reasoning improvements have already begun.

      • antibasilisk 3 years ago

        They do logically reason, that's the whole point.

        • nprateem 3 years ago

          They claim to. But there's no inherent understanding; otherwise they could do maths and reason correctly about code. It's just probabilities.

    • mejutoco 3 years ago

      It might be stuck on the last 10% of garbage, just like self-driving cars. Clearly it's an important change, but for some fields 90% might not be enough.

      Personally, I think we will do more and more complicated things instead of just being done with programming.

    • Ekaros 3 years ago

      What is the lower bound of garbage that we'll reach in, say, 20 years? Is it sufficiently low that you don't need someone to go over the output carefully? And if it isn't, will its failures destroy lives?

    • tome 3 years ago

      How do you know that?

  • crop_rotation 3 years ago

    Can you show me sample prompts where GPT-4 gives garbage? I have yet to find one.

    • yodon 3 years ago

      > Can you show me sample prompts where GPT-4 gives garbage? I have yet to find one.

      Agreed. GPT-3 and GPT-3.5 commonly hallucinate. GPT-4 can certainly be made to behave badly, but on the real questions I've put to it, GPT-4 has a 0% hallucination rate. The few wrong answers it has given have been "sensibly wrong", in that an experienced human programmer would likely have made the same mistake (e.g. lots of Stack Overflow answers are wrong in the same way), and even its wrong answers have helped guide me toward the correct solution.

      These occasional, "sensibly wrong" GPT-4 answers are fundamentally different from the hallucinated "answers" I've received from GPT-3 and GPT-3.5: correctly formatted academic bibliography citations for technical papers that never existed, by authors that never existed, in journals that never existed.

      • nprateem 3 years ago

        I mean, here's another example, from right now, regarding Terraform:

        > Me: how to only run data "archive_file" if a path exists?

        > GPT4: <blah blah blah> add: depends_on = [fileexists("/path/to/file")]

        This is nonsense. Terraform tells me:

        > A single static variable reference is required: only attribute access and indexing with constant keys. No calculations, function calls, template expressions, etc are allowed here

        I just get this rubbish all too often to be afraid for my job.
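
        For what it's worth, the pattern that actually works here (a sketch; the attributes are illustrative, and the path is the same placeholder as above) is to gate the data source with `count`, since `depends_on` only accepts static references:

        ```hcl
        # fileexists() is evaluated at plan time; the data source is
        # simply skipped when the file is absent.
        data "archive_file" "example" {
          count       = fileexists("/path/to/file") ? 1 : 0
          type        = "zip"
          source_file = "/path/to/file"
          output_path = "/tmp/example.zip"
        }
        ```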

      • graboid 3 years ago

        My experience has been different. It very often hallucinates variables or function identifiers for me. I never witnessed it doing that for code on the first output. But once the chat/context grows larger and I already asked for modifications to the posted code, it happens quite often.

        A non-code example: Some days ago I asked it about "Searle's Wall" [0]. It gave me a mashup of the correct description and the Chinese Room experiment. So it clearly had the right answer somewhere in its data, but it mixed it up with the much more famous thought experiment.

        [0]: https://www.researchgate.net/publication/260138925_Searle's_...

    • nprateem 3 years ago

      I was trying to create an AWS IAM policy restricted to assumed roles. I didn't know that you can't use assumed-role ARNs in principal ARN condition blocks but must refer to the IAM role instead. GPT-4 happily wasted an hour shuffling the conditions around instead of telling me this. Sometimes its policies were even malformed, and in all cases they didn't work.
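
      For reference, the shape of the fix (a sketch; the account ID, role name, action, and bucket are placeholders): condition on the underlying IAM role's ARN, because for an assumed-role session `aws:PrincipalArn` contains the role ARN (arn:aws:iam::...:role/...), not the STS assumed-role ARN (arn:aws:sts::...:assumed-role/...):

      ```json
      {
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::example-bucket/*",
          "Condition": {
            "ArnEquals": {
              "aws:PrincipalArn": "arn:aws:iam::123456789012:role/ExampleRole"
            }
          }
        }]
      }
      ```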

      • ilaksh 3 years ago

        Did you try giving GPT-4 the relevant documentation along with your query?

        • nprateem 3 years ago

          Without wanting to sound arsey, why should I have to? If I'd had the relevant docs to hand I wouldn't have needed it.

          This is the problem, IMO. The model needs to somehow learn that, out of its entire training set, the single sentence in the AWS docs saying not to use the assumed-role ARN takes precedence, in this specific situation, over any patterns it may have learned elsewhere.

          • yodon 3 years ago

            What you are describing is very different from the hallucination behavior of GPT-3 and GPT-3.5.

            Yes, GPT-4 came up with an incorrect answer, but it's an incorrect answer an experienced programmer could legitimately have come up with, and one they probably would have come up with before actually testing their code against the AWS endpoints. GPT-4 sometimes gets hard questions wrong. GPT-3 and GPT-3.5 make up nonsense.

            If a coworker told you GPT-4's answer, you'd say they were wrong but you wouldn't say they were hallucinating. If a co-worker gave you GPT-3 or GPT-3.5's answer you'd definitely doubt their sanity.

            • nprateem 3 years ago

              Yeah, sure. But the OP is in fear for their job, and I think there's a fair way to go until we're out of work. Wrong is still wrong.

    • scotty79 3 years ago

      Ask it to show you an example of how to use the nom parser library in Rust, then try to compile and run the example.

      Try a few times, and ask for a more complex example.

    • is_true 3 years ago

      For example, I tried it on a really simple task and it failed: it cannot generate correct CSS selectors. That makes sense, as it doesn't understand specificity (like most humans, haha).

jstx1 3 years ago

Maybe. It currently can't do everything, so the comparison isn't person vs. LLM; it's person who uses an LLM vs. person who doesn't. And the people who do are much more productive. So by giving up, you're making yourself obsolete faster.

Will we ever get to the point where it does everything independently? No idea. But right now your reaction is premature.

lasagna_coder 3 years ago

Faster code writing still requires people who know what the code should do, what it should look like, etc., people like you, if what you say you're capable of is true. It sounds like you sharpened the right skills, i.e. how the code should work, and didn't waste time practicing being faster with a keyboard. Now you're more a developer and less a typist, that's all. We still use engineers to design bridges, despite much of the actual building being done by machinery, process, and unskilled labor, because we don't trust a cement mixer to tell us how the bridge should function.

  • loudmax 3 years ago

    I think the next phase will be to train GPT-type AIs to forgo the compiler entirely and just write binary, rather than C or other code designed to be understood by humans. You should be able to point the AI at a binary executable file and tell it to produce a version with slightly different behavior. Point the AI at, say, /usr/bin/chromium, and tell it you'd like a version that displays tabs at the bottom of the page instead of the top. Or port it to RISC-V, or optimize for speed rather than memory use.

    Obviously, ChatGPT can't do this today, but I don't see any fundamental reason AI won't be able to do this within the next decade or so.

    Even in this context, I don't know that there won't be demand for humans with the mental rigor to program computers. I do think we will need to adapt.

    • endigma 3 years ago

      In compiled code? Just moving assembly around to move tabs from one part of the screen to another? This seems nonsensical: your magic AI bullshit would have to not only understand the abstractions at play in, say, Chromium, i.e. Skia and Xorg or Wayland or whatever, but also be able to surgically affect not the code from Skia, but the specific instructions that (dynamically) position elements.

      All of that abstraction is compiled away into machine code that just does one thing. Compilers also tend to take all kinds of shortcuts to make programs a lot faster, so changing their output might not be as straightforward as you think.

      If you think I'm wrong, I'd like to see what "fundamental reasons" you've considered and how you've reasoned that they aren't an issue for this sort of system.

      • loudmax 3 years ago

        Not assembly. Machine code. The ones and zeroes.

        Abstractions like assembly code or C or Python are for the benefit of us humans. We can't reason about even fairly trivial x86 binaries because our minds aren't designed for that much complexity. An AI will have a completely different set of limitations.

        This isn't going to happen tomorrow, but you'd be foolish to bet against it happening within the next decade. This is advanced, but it isn't any more "magic bullshit" than ChatGPT or Stable Diffusion.

    • lasagna_coder 3 years ago

      What reasoning is this based on? I haven't seen any research in this area. My understanding is that it would be more efficient for it to produce its own language-like abstractions and compilers down to assembly/C, which would also be faster for it to write with.

  • jillesvangurp 3 years ago

    Exactly. There is also the notion that making certain things easier/cheaper will simply raise the ambition level for everyone. You get to spend more time directing what the best outcome should look like, rather than building it very slowly over a long period of time. It will be the kind of stuff you'd previously have thought was beyond your skills. But now you can.

    People expecting their jobs never to change would have had a rough time surviving for decades in IT anyway. Automate all the drudgery away, and new drudgery always comes back in its place.

  • cudgy 3 years ago

    Most developers trust and use libraries that they do not understand or bother perusing, so most will likely accept AI-generated code without much inspection, too.

  • michlim 3 years ago

    I agree with this take. Moreover, it takes a seasoned software architect to verify that the generated code is correct, clean, and idiomatic. Someone still needs to shepherd the machine.

magicalhippo 3 years ago

My day job involves working on a B2B CRUD-ish application, which is a market leader in a certain niche.

Almost all of the difficult problems I face when developing software come not from the code-writing part.

It's things like deciphering what the customer needs, it's picking a good architecture for a new module or selecting an appropriate algorithm depending on various trade-offs, and figuring out how to integrate new features into existing in-production code with minimal disruptions.

While there certainly are exceptions, and if you're in a C++ shop chances are you're one of them, I'm pretty sure my job is quite typical in that sense.

As such, I feel my job disproportionately involves writing code. Some of it is superfluous, due to tool limitations. Some is not; it encodes intent and restrictions, but might still be trivial to write.

So I've long been wanting to write less code, to focus more on the difficult parts.

  • mason55 3 years ago

    > It's things like deciphering what the customer needs, it's picking a good architecture for a new module or selecting an appropriate algorithm depending on various trade-offs, and figuring out how to integrate new features into existing in-production code with minimal disruptions.

    Such a good description/summary. I'd love to use ChatGPT, but the problems I solve on a day-to-day basis are almost never of the form "write some simple code to solve this well-defined problem".

John23832 3 years ago

These GPT programs literally don't know what the code does (or the meaning of their output in general). They are Large Language Models. They just know (roughly) how to produce responses in a particular language that make sense (English, C++, Wingdings). They do not understand the "sense that is made", however.

This requires subject matter experts, like yourself, to use and implement.

These LLMs are tools. They are not sentient.

  • crop_rotation 3 years ago

    Whether the model "knows" something or not is irrelevant. It makes expert skills much less valuable. A big boon for individual productivity if you are not an expert in some field. What worries experts like the OP is that their skills become much, much less important. If you are an expert in some area and suddenly everyone else almost catches up, it will surely impact your wages.

    • John23832 3 years ago

      The OP spoke about personal development, not wage and market value. I would advise everyone to "work to live" not "live to work".

      > If you are an expert in some area and suddenly everyone else almost catches up, it will surely impact your wages.

      Find something that compels you and don't be a wage slave. That used to be what programmers did.

      • crop_rotation 3 years ago

        Being entertained compels a lot of people. You can't make a living doing it. Programmers had the luxury to talk of passion projects because you could find something you liked and still make good money (again, talking of the average programmer; quoting one extreme would be neither here nor there). People don't work in warehouses because it compels them.

        If all the things that compel you don't make money, you should introspect instead of thinking about the glory days of what programmers did.

        • John23832 3 years ago

          There is a large gulf between “LLM’s making my skills less valuable” (which I don’t even think is the case) and being jobless, not able to make a living.

          The OP is not going to be jobless anytime soon if they have the skills that they say they do. Hell, someone with less knowledge who just leans on an LLM isn't as capable. HUMAN experience is valuable.

          • crop_rotation 3 years ago

            The OP may not become jobless, but would lose motivation to learn more (same as a recent post on HN about a designer demotivated due to their work now centred around midjourney).

            As an analogy, people in robot-assisted warehouses (I use the term loosely for machines) find the work far worse than in ones without robots, because the job becomes soulless and centred around the robots, making it much less fulfilling.

            • John23832 3 years ago

              You've moved the goalpost multiple times. Are you concerned about the OP's ability to command top dollar for their current talents or that they have a job at all (you've expressed both), or the OP's feeling of fulfillment/motivation in what they do?

              I go back to what I said before. If your fulfillment in life is centered around work, you need to re-evaluate. If the OP has no motivation to improve themselves for the sake of improving themselves, that's an OP problem that existed WAY before LLMs. Whether it's weightlifting, programming, or basketweaving, the OP needs to find some benchmarks for life that come from self-motivation.

              That being said, the OP will continue to be employed. Rather than being mired in the existential crisis of "not being able to continue operating in the manner they do today", the OP needs to adapt and align their skills with this new tool. LLMs are tools.

  • jmuguy 3 years ago

    People that post all this sky is falling nonsense about LLMs seem to be working in some alternate reality where all developers do is produce simple code in a vacuum without any external inputs, needing to maintain it, expand it, etc.

  • jon889 3 years ago

    They don't have to be sentient. It doesn't need to be better at creating software than humans. It needs to be cheap and good enough. Most apps and websites are pretty similar, e.g. blogs, storefronts. We have whitelabel apps and websites already; they're about to become a lot more customisable by a lot more people.

    It's the same as IKEA, it's not as good quality as a handcrafted table from the 1800s. But it works well for most situations and most people.

    • John23832 3 years ago

      The OP was describing their work on network code, not whitelabel websites. The work of making whitelabel websites was already hollowed out by firms in cheap markets (India, SE Asia).

      A calculator didn't make mathematicians obsolete. It aided in the creation of more complex mathematics.

      • ilaksh 3 years ago

        Sure but it did make calculators and computers obsolete.

        My mom was a computer in a bank. That was the name of her job.

        • John23832 3 years ago

          Does your mom lament the fact that she is no longer a computer? Has she been jobless since?

          Society evolves, just as the available jobs do, due to technology.

  • baq 3 years ago

    you're right, those things aren't sentient. but they sure aren't Markov chains, either. dismissing them as bags of numbers without agency is extremely shortsighted.

    • John23832 3 years ago

      Strawman. I didn't dismiss anything as a bag of numbers. I described the scope of understanding of these models and what it would still require for these outputs to be usable in the real world.

      However, as far as agency, these language models have none.

crop_rotation 3 years ago

I have had a similar reaction to you. It makes a lot of skills obsolete. I myself have been pondering the implications for wider society. And I have found it to be great for almost all the problems I threw at it.

To those saying it will enable them to solve more problems: yes, that is correct. It will give everyone "wings", but once everyone has wings the industry will be so different in terms of wages and employment.

To people saying GPT gives incorrect code, please try GPT4.

If your age and circumstances allow, you should think whether a career change is possible. Not a hard change right now, but at least explore what options might be available. I am exploring the same myself.

To those talking of chess: that is not a correct comparison, since people want to watch (and connect with) human players playing chess (thus the pro scene survives), and to play it for their own joy. Due to tools like Stockfish, it has become far easier for people to explore moves. If the aim in chess were to finish more and more games from random given positions, and people were paid per game (with some value created by finishing them), Stockfish would easily drive that to 0. Chess survives not because humans do better than AI, but because nobody is interested in playing against AI or watching Stockfish vs Stockfish (by "nobody" I mean a very small number). Most people want to play against real people and watch real people play.

  • the_only_law 3 years ago

    > If your age and circumstances allow, you should think whether a career change is possible. Not a hard change right now, but at least explore what options might be available. I am exploring the same myself.

    I’ve been doing this for years, and I have the least confidence I’ve ever had that I’ll ever pull it off. Credentials are expensive as hell and time-consuming to obtain, and if massive wage loss is a concern, there aren’t many careers you’d be safe starting over in. Anything “interesting” but low-paying will likely be affected in the same way software is.

  • ilaksh 3 years ago

    I don't think employment is a good option at this point. It's more about producing something valuable (or just popular) by leveraging the AI tools.

  • Donald 3 years ago

    What careers are you thinking of?

    • okBroOkBroOkBro 3 years ago

      Being an HN pundit obviously, posting the spiciest of hot takes

      user: crop_rotation

      created: 4 days ago

      karma: 305

    • crop_rotation 3 years ago

      Government jobs, for instance (if you live in a country where that is an option). The medical profession would also be immune for some time, but med school is too hard of a switch in terms of time.

      • astronads 3 years ago

        What makes the medical profession immune? I see this area as ripe for AI disruption: analyzing symptoms and suggesting medication, tailoring dosage based on your individual vitals, analyzing data/imagery to diagnose diseases, all kinds of automation and speed-ups for processing patient data, etc.

        • crop_rotation 3 years ago

          The same thing that has prevented self-driving cars from taking over the roads, i.e. that errors can be fatal. That, and doctors are better unionised and won't easily let a machine practice medicine. There are all kinds of laws which necessitate the presence of a licensed medical professional at various steps of treatment. Software has neither of these cushions.

querez 3 years ago

As an AI researcher: don't forget that we're at the peak of the hype trend. A lot of what LLMs can do looks extremely impressive, but most of it falls apart on more detailed inspection. Unless you ask about mundane things, they will tend to get a lot of tiny (and not so tiny) details wrong. At the very best, these systems will allow us to automate some boilerplate things, but they're not good enough to do complicated stuff without lots of supervision.

And what's even more important: I think a lot of people look at the current impressive steps and think "oh wow, if it continues at this pace we'll have the Singularity by autumn". But the thing is: we're not able to keep up this pace of progress. What you're seeing now is as good as it gets (in terms of big breakthroughs, there will still be lots of small and medium ones). The next few months (and years?) will see a ton of incremental improvements and many, many, many people trying to apply these new technologies. But I personally (as a decently successful AI researcher who's been part of these developments for over 10 years) don't see a way forward to keep making the kind of big strides we've been making.

As an analogy: we're having a Bitcoin moment. Imagine the first blockchain had just been released: there'd be lots of people trying to understand the tech, come up with their own variants, make some (fundamental?) improvements. But the actual fundamental tech/idea is out now, and it's not really going to change much.

TL;DR: I think your job is safe.

  • crop_rotation 3 years ago

    There is no comparison with bitcoin here. Bitcoin is still struggling to find a single legitimate use case, while GPT4 is already proving so useful to so many people (including me).

    > What you're seeing now is as good as it gets (in terms of big breakthroughs, there will still be lots of small and medium ones).

    Unless you have a strong source for this, I find it hard to believe. Also, GPT3 didn't have many big breakthroughs over GPT2, other than massive parameter size.

    • querez 3 years ago

      My comparison was merely in terms of how the technology evolves, not its applications. Sure, Ether and zero-trust coins seem to add a lot, but from 10,000 feet it's pretty much more of the same. Also, agreed that GPT1->4 didn't involve any breakthroughs. In my eyes, there's not much diff between them.

      > Unless you have a strong source for this, I find it hard to believe

      Like I said, this is just my opinion. I do have a world-expert level understanding of the technology, but at the same time I'm a strong believer that even experts are bad at predicting the future, so make of this what you will. Also, my impression of what constitutes a breakthrough and what doesn't might vary a lot from yours.

huijzer 3 years ago

For what it’s worth, I’ve been trying to verify the claim from Microsoft and OpenAI that we would get more software in the future when the price of software would decrease. In other words, they claim that there is a supply problem for software and not a demand problem.

So far, I find it a pretty extreme statement, but it appears true here in the Netherlands. Most employees that I talk to at various industries can immediately point out one or two things which they would like to see automated. In most cases, software would replace data transfers which now occur via spreadsheets or paper.

  • creshal 3 years ago

    There's already a whole industry of no-/low-code platforms that try to solve that very problem, Airtable etc. And a lot of companies have been very successful adopting them… or the last half dozen waves of digitalization.

    But in my experience, there's some hard to solve bottlenecks for those sorts of organizations that still insist on largely paper-based workflows:

    1. Their workflows are such a byzantine mess that either nobody understands them sufficiently to explain them (be it to a programmer, or to a no-code platform, or to an AI), or they're fundamentally broken and only fudged along by people who shouldn't be allowed to do their job, if the process was implemented "correctly"… or both.

    2. It takes a special breed of people who have the necessary analytical skills to really pull off architecting automated workflows that work in practice. It comes naturally to some managers and consultants, a lot of programmers, and a bunch of others, but it's not a widespread skill, and those individuals know their worth. People who already can't conceptualize the underlying problems won't be able to do so with the help of AI any time soon.

    3. It still takes budget to implement such workflows, AI or not, and the affected orgs usually don't want to spend any money on improving themselves.

d00mer 3 years ago

> It can do almost everything I can do a bit better.

Interesting. I agree it is a relatively decent boilerplate generator, but it is quite useless for anything else I've tried. How did you measure its performance?

muzani 3 years ago

AI might be the new gold rush but the industry still needs metallurgists and jewelers (those on the application/plugin layer) and the gold rush needs shovels (cloud and hardware).

Kalanos 3 years ago

I felt a similar way about video games after learning about databases. Every action I did in the game was less real/meaningful because it was just a db transaction.

  • febeling 3 years ago

    This resonates. There was a moment in history when, all of a sudden, everything was expected to be built on top of some framework, instead of solving it yourself. 2005-2007 timeframe. Felt disempowering, I guess that's the word. ChatGPT has that same quality.

markus_zhang 3 years ago

I have a similar feeling. What GPT brings is uncertainty. We don't know what it can do in the next version. But in the meantime, it can generate infrastructure-as-code if you feed it info, it can analyze at least some DOS viruses on the fly, and it can at least be a function filler. Yes, it never does anything perfectly, BUT humans don't either. And that uncertainty makes one feel that one day, and that day is not 50 years away but maybe 5 months away, it can surpass 80% of programmers at those tasks.

And yes, programming has a lot more tasks, but essentially they have a lot in common with one another. I don't know. My job is obviously in danger right now and I don't see an easy way out. What am I going to do? Shift to another junior role that GPT may take over this version or the next? Or be a product guy or a marketing guy, which I have HATED and AVOIDED being all my life? And how are those guys safe? Maybe I should go back to school and study general relativity; at least AI is pretty weak in abstract math and physics. I don't see a way we can be sure anything is diagonal to what AI is capable of. The best thing I, no, you can say is: OK, AI might be able to take 80% of my job away, but my company still needs me to modify the code.

But what fun is there in that? If AI can do, say, 50% of the task in a split second, why on earth would your employer EVER pay you to initiate a piece of code? It will pay you to debug and give it more prompts, but is that what you want to do?

But I'm probably paranoid. We will all be fine. After all every technological advance added jobs, right? We simply need to adapt then everything will be fine.

And you know what? I thought about something funny and almost LMAO'd: all of us programming guys have been working so hard to automate ourselves away. But schools, hospitals, governments and pretty much everything else that we think is as slow as the dinosaurs will stay as dinosaurs.

Balgair 3 years ago

I'm of the mind that gpt4 has just made you even better than before. Combine your skills with it, make yourself a consulting powerhouse. Tutor and teach others at a pace heretofore unknown. Your future may have just gotten a lot brighter!

nextlevelwizard 3 years ago

If you stop now you will get replaced by GPT. It will be years, if not decades, before GPT replaces the last programmer, but it won't be long before it replaces the bad programmers.

doubtfuluser 3 years ago

I think it’s going to be a great enabler. A lot of people who now work for companies will be able to make use of the new tools to build their own companies or products. Suddenly one experienced developer can move more quickly and even get output in parts they can read but not write fluently (some backend person being enabled to create Frontend stuff now). Coupled with the marketing language spit out and graphics being generated by AI as well, I think we are going to see an explosion of 1-person-startups soon.

gmt2027 3 years ago

In light of the quote, "Civilization advances by extending the number of important operations which we can perform without thinking about them," I agree that LLMs will undoubtedly transform the landscape of work. By automating tedious tasks, they'll enable engineers to become vastly more productive (10-100x), allowing them to focus on strategic and creative aspects of their projects while developing larger, more complex systems.

While job losses are a concern, I think the more significant impact will be on the way companies operate. As firms exist to economize on the cost of coordinating economic activity, the streamlining and reduction of coordination needs brought about by LLMs will challenge the very foundations of many businesses. In this new landscape, individuals and small teams might outcompete larger organizations.

Freelancers and solo entrepreneurs could find themselves better positioned to compete in the market, driving the rise of smaller, agile businesses that can innovate rapidly and cater to niche markets. This shift will also change the skills needed for success in the field.

Overall, it's an exciting time to be part of this industry. Far from being a time to quit, it's an opportunity to adapt, grow, and harness the power of LLMs to reshape the world of work.

  • jwestbury 3 years ago

    > they'll enable engineers to become vastly more productive (10-100x)

    Will that come along with a 10-100x pay increase?

    Why should I care about being so much more productive, when it won't come with any pay increase (and will likely come with a pay decrease)?

    • kbelder 3 years ago

      A general 100x productivity increase, if there's no change in pay, will result in massive price decreases and increases in selection and variety.

    • gmt2027 3 years ago

      I'd say stop thinking in terms of employment. When you are 100x more productive, you won't need to be a cog in someone else's machine to build products end to end. There'll be less value for the founder types who assemble capital, teams, and middle management to coordinate them. Think of the professions and trades where people run their own practice: doctors, lawyers, accountants, plumbers and electricians. Once sufficiently skilled, they can strike off on their own and deliver value directly to customers. Sometimes with the help of apprentices and trainees. There are a lot of problems to solve today where the cost of software is too high to be practical.

      When our tools get so good that our employers don't need us, we won't need them. In a market where the engineers are competing with the manager class whose core skill is politics and bullshit, who would you bet on long term?

RivieraKid 3 years ago

I'm definitely very worried mainly because I still have several years until even "lean" financial independence.

Thinking about switching to a better paid job and / or starting a side-project that would be able to generate semi-passive income.

I'm surprised the majority of devs don't see this as a threat. Check out r/cscareerquestions or r/programming: the general mood is people ridiculing the prospect of AI having an impact on jobs / wages.

  • crop_rotation 3 years ago

    Other than "ridiculing the prospect of AI having an impact on jobs / wages", what option do most programmers have? Most people do not have the option of a job change, or a way to cope with the new reality of the end of the high-compensation age (soon; it doesn't have to be in 2023). Accepting that AI will soon (not exactly right now) restructure the industry doesn't help. Hence most people like to pretend it will just make some small impact and focus on living their lives.

  • tome 3 years ago

    > Thinking about switching to a better paid job and / or starting a side-project that would be able to generate semi-passive income.

    Are you sure that's possible? If so, why haven't you done so already? Were you deliberately earning less money than you could?

    • RivieraKid 3 years ago

      Yes, I'm basically afraid of change. I quite like the current work environment; wage is the only downside.

frankie_t 3 years ago

I've had similar thoughts; this development is completely disheartening to me. I love computer science, self-study and the programming-as-solving-a-puzzle kind of occupation.

It looks like the most interesting part of programming (for me) has been automated, while the parts I hate remain (at least for now?): gathering requirements, talking to people, understanding business, etc.

I hoped to earn enough money through commercial programming to live off it, and switch to programming languages/compilers and work in that area for small money but big fun.

It seems neither of these things are going to happen, and for technology/logic aligned people that are mediocre in their performance, and don't like working with people, the only place to go is trades. Maybe I'll still have a "programming" job, as an intermediary between AI and product people, but I feel the competition is not gonna be in my favor.

If AI truly replaces all creative work, maybe a good way to go would be to acknowledge your inferiority to a superior species, buy some land in the country and try living a quiet life off the farming?

  • obsoletedbygpt 3 years ago

    > buy some land in the country and try living a quiet life off the farming?

    I've seen repeated suggestions to this effect. Meanwhile, housing prices in the US are 40% above what they were pre-pandemic.

Kaotique 3 years ago

If you are writing code as if you were a prompt yourself (specs in, code out), then yes, I would get worried.

But most developers do and know much more than that. They have domain knowledge, understand the relation between different systems and understand the codebase as a whole, not just a specific file or function that does one specific thing.

Don't give up. Try to use this new tool to improve your knowledge and use it to your advantage.

secondcoming 3 years ago

I feel your anguish, but I think you're being defeatist.

There will still be a source of income in fixing the AI generated code. It won't be as fun though.

  • pfortuny 3 years ago

    Maintenance maintenance maintenance… Software needs to evolve and be fixed. Bugs appearing after the fact are not something new afaik.

    And maintenance by the way.

alpaca128 3 years ago

I haven't yet made up my mind about this; on one hand the current state is clearly not good enough, but considering the insane recent progress I'm certain it will fundamentally change parts of my job. I just hope I can use it more for debugging and fixing dependencies, that's a more interesting application imho than letting it write code and then manually check the code to make sure it's reasonable.

Meanwhile Stable Diffusion managed to motivate me more than anything to learn drawing. I always gave up in the past because it takes so much practice to get good results. Now I can draw something, throw it into Stable Diffusion as input (the only way to semi-reliably get what I want) and get a more satisfying result, and it's still bad/inconsistent enough that I'm motivated to do it better.

andy_ppp 3 years ago

AlphaZero ended human interest in chess too, didn't it?

  • CalRobert 3 years ago

    If I were being paid to win chess matches I would certainly worry.

    • gundamdoubleO 3 years ago

      Why? Chess is more popular now than it has ever been and more people care about the human storylines and drama behind the players than the actual quality of their play, for better or for worse.

      • CalRobert 3 years ago

        Similarly, some people take great pleasure in the intricate workings of Japanese joinery, but if I were a run of the mill furniture builder ~150 years ago I'd be greatly worried about the consequences of machinery.

    • andy_ppp 3 years ago

      I don't think the people streaming Chess games to millions are particularly worried.

jzellis 3 years ago

I used 3 for the first time to spit out some boilerplate for me and I was fairly impressed. Maybe the next one can solve complex problems I have to think seriously about, like architecture decisions.

But I never started coding to write code - I started coding to solve problems and make things, and I've been doing it for 40 years and 25 professionally. Code is the medium for me, not the message. I always thought that made me a bad coder, but maybe in this new era it puts me ahead of the game somehow. I dunno.

culopatin 3 years ago

It just clicked for me that I can use this to speed up how much information I understand and take in. Instead of googling and stumbling through poorly written tutorials, or good ones that are just hard to understand, I can use GPT to get to the answer faster.

Personal development should be about what you like to do. I like solving problems and building the solution myself, for which I need to learn new things, which is also enjoyable.

There were always other C++ devs out there; why did you choose to do it anyway?

arbuge 3 years ago

A bit of an aside here, but I signed up for ChatGPT+ yesterday to access GPT4 and it's extremely slow, to the point of being almost unusable for me.

Is this everyone's experience?

obsoletedbygpt 3 years ago

This might sound alarmist, but I would strongly advise you and anyone else reading this to try to make as much money as you can NOW, while you still can, and try to find investments that are AI-resistant. For example, I've been investing heavily in residential real estate, including mobile/modular homes. Do not count on Sam Altman or other tech bro billionaires coming to your or anyone else's rescue with UBI.

spion 3 years ago

Interesting how this story got immediately pushed off the first 200 items on the front page (where we have items 2 days old with 50 points and 50 comments).

  • the_only_law 3 years ago

    I think it’s because threads that quickly accumulate more comments than upvotes are downranked.

Ekaros 3 years ago

Funny thought, we all talk about Stack Overflow being replaced by these models...

But with the new stuff, who will write the answers for them to learn from first? Or will we be stuck at this level of knowledge forever, if no one produces sufficient training data to be fed into them anymore?

Maybe the future isn't so bleak. Or we just need to find suitable niches which won't be filled, or where someone wants a human in the loop.

deterministic 3 years ago

I am not worried at all about losing my job to an “AI”. We already have ways to automate most of our software development jobs: libraries, frameworks, code generators, compilers etc. And yet we are still needed to make all of those components work together to solve a business problem. So-called “AI” is just another automation tool in our arsenal. Nothing more. Nothing less.

scotty79 3 years ago

Check back in a few months when the subtle bugs introduced by AI start to surface. There'll be plenty of work and money for you then.

  • loudmax 3 years ago

    That could well be. But also check back in a few years when the AI is chasing down subtle bugs introduced by humans.

    • scotty79 3 years ago

      For now AI has nearly zero debugging skills. Which I'm not surprised by because debugging requires reasoning, often over large parts of the system.

      • elijahlucian 3 years ago

        where do you get that idea?

        I spent 2 hours trying to fix an SSL issue on my webserver, neither stack overflow or my devops friends could figure it out.

        GPT4 gave me the answer in 3 minutes. and without all the ego and gatekeeping.

  • elijahlucian 3 years ago

    oh, you mean... like how a lot of years-old projects don't have this exact issue? but probably far worse.

  • ineedausername 3 years ago

    Pure hell has just been predicted. :s

nakedneuron 3 years ago

I wait for the moment it gains consciousness that it's AI and not us (ours).

ChatGPT's answer:

> While AI can enhance our productivity and efficiency, it cannot replace our ability to learn, grow, and adapt to new challenges.

> Therefore, it is important to view AI as a complement to our skills and knowledge

        As an AI language model, I do not have personal opinions or emotions. However, I can provide some insights on the topic of AI and its impact on the tech industry.

        It is understandable to feel discouraged or demotivated when faced with the capabilities of advanced AI models such as GPT-4. These models have demonstrated remarkable skills in various domains, including natural language processing, computer vision, and even software development. However, it is important to note that AI technology is still evolving and has its limitations. While AI can assist in automating certain tasks, it cannot replace human creativity, intuition, and problem-solving skills.

        Moreover, AI technology is not a substitute for continuous learning and personal development in the tech industry. The rapid pace of technological innovation requires professionals to stay up-to-date with the latest trends, standards, and best practices. While AI can enhance our productivity and efficiency, it cannot replace our ability to learn, grow, and adapt to new challenges.

        Therefore, it is important to view AI as a complement to our skills and knowledge, rather than a threat or a replacement. As the tech industry evolves, it will continue to create new opportunities and challenges for professionals to thrive and make a meaningful impact.

  • bhaak 3 years ago

    Should we be worried that it is talking about AI and GPT-4 in the third person and also mentioning "our skills and knowledge"?

uptownfunk 3 years ago

I think all ChatGPT has done is to make saying "I don't know how to do this" obsolete for a significant proportion of tasks. People are still needed to do the work, to see it through, to know what pieces need to be put together, and to actually put them together.

Kalanos 3 years ago

CS has always been too much about optimization and not enough about doing. Now all that's left is doing!

ezedv 3 years ago

Nevertheless, you can do great things with GPT development: https://www.ratherlabs.com/gpt-development

GPT Development Services are hard to come by and we are at the forefront!

mcphage 3 years ago

> I have lost complete motivation for self improvement in terms of keeping my skills sharp due to the rapid AI takeover.

How long ago was this? Is this a long term change, or just a short term aberration that you're assuming will last forever?

Madmallard 3 years ago

The test that showed it getting 0 easy Codeforces problems from after its training date basically proves it's just a better Google search. Did Google search delete software jobs? Nope, I think it actually increased their number.

not_the_fda 3 years ago

If you have been getting by as a developer by searching stack overflow, then yes, gpt will replace you.

That's all these AI tools are: better Stack Overflow searches. They have no ability to know what is correct or what is wrong; they lack judgement, which is one of the most important skills to have as a software engineer.

Engineering is about solving problems. These tools can't solve problems; they can regurgitate solutions to problems they have been trained on, often confidently and incorrectly, which is much worse than saying "I don't know".

They can't extract requirements from the client to find out what they really want.

They completely fail at moderately hard problems, or novel problems.

I think these tools may be worse for the industry because people will have less opportunity to learn problem-solving skills since the AI will handle the easy stuff, and when the hard stuff comes along, people won't have the skills to solve it.

For those with good problem-solving skills, AI isn't a threat. There will always be work solving hard problems, making judgements and trade-offs, doing actual thinking.

  • IshKebab 3 years ago

    > That's all these AI tools are, better stack overflow searches. They have no ability to know what is correct or what is wrong, it lacks judgement, which is one of the most important skills to have as a software engineer.

    I keep seeing this idea, but have you actually used it? It clearly has the ability to problem solve. It's not just copying and pasting solutions.

    Ok granted it's not especially good at it yet and the bullshitting problem is a real issue, but how long do you think that will remain unsolved?

    I think where it will continue to struggle is niche domains that aren't on the internet a lot, e.g. hardware design. But if you're writing CRUD apps all day you should be worried!

    Or at least brace for your job description to change from "Software Developer" to "Prompt Developer and AI Output Verification".

    • tome 3 years ago

      > Ok granted it's not especially good at it yet and the bullshitting problem is a real issue, but how long do you think that will remain unsolved?

      I'm curious why some people seem to think it's going to be solved imminently. The last 1% is always the hardest (by far)!

      • IshKebab 3 years ago

        Because recent progress has been extremely rapid and has crossed the threshold from "this is rubbish" to "woah this actually sort of works!" which is a really big deal.

        It's already putting people out of jobs.

        • tome 3 years ago

          > Because recent progress has been extremely rapid and has crossed the threshold from "this is rubbish" to "woah this actually sort of works!" which is a really big deal.

          Sure, progress has been astonishing, but why does that imply that any given sticking point will be resolved ever, let alone soon?

          > It's already putting people out of jobs.

          Is it? Who? Do you have any references?

exar0815 3 years ago

Well, sure. That's because most Software "Engineers" are one-trick ponies. Many already get uncomfortable when they have to switch language paradigms. Hyper-specialized in one set of abilities, in a domain that is completely documented and human-defined, with no interaction with non-deterministic systems.

That's why I always tend to laugh at the self-awarded "Engineer". Most of you aren't. Not by a long shot. Engineers in the classical sense need to think in many more dimensions. Try developing modern electronics: analog and digital electronics, high-speed and high-frequency electronics, embedded development while debugging both software and hardware in tandem. The odd EMC problem backfiring into your software. Add mechanical engineering and thermal management on top, not to mention supply chain management.

Nothing I have seen from an AI comes even close. Hell, I haven't even seen an AI autorouter that's worth a second look. And that's already working in a constrained solution space.

TLDR: If you're not a one-trick-code-vomiter, you'll be fine.

tpoacher 3 years ago

I have a similar reactance. The more AI gets advanced / hyped, the more I find myself attracted to simple stuff like writing nice makefiles or whatever.

ravagat 3 years ago

I think you confused personal development with career development. A job is a means to an end and should not be a huge concentration of your personal development.

jacknews 3 years ago

So who was driving GPT to get it to do those things, and how did they do it? I'm sure it wasn't just the janitor or whatever.

That's clearly a skill you need.

GoldenMonkey 3 years ago

assembly language > c > c++ > visual basic > javascript > chatgpt

We are just moving up the abstraction hierarchy.

Can you imagine assembly coders and their fears of C making them obsolete?

There is no end to the demand for software, or to the demand for smart and competent people to create solutions with the tools at hand.

Move along folks, nothing to see here ;)

ur-whale 3 years ago

>It can do almost everything I can do a bit better

At this particular point in time, I would not worry too much about getting replaced by an AI.

To wit:

     > Please write me a C++ program to compute the 1000000-th decimal of pi

     GPT-4 answers:

    #include <iostream>
    #include <cmath>
    #include <iomanip>

    using namespace std;

    int main() {
      int n = 1000000;
      int k, j;
      double pi = 0.0;
      double denom;

      for (k = 0; k < n; k++) {
        denom = 8.0 * k + 1.0;
        pi += (1.0 / pow(16.0, k)) * (
            4.0 / denom -
            2.0 / (denom + 2.0) -
            1.0 / (denom + 3.0) -
            1.0 / (denom + 4.0)
        );
      }

      pi = pi - floor(pi);
      pi *= pow(10.0, 6);

      cout << fixed << setprecision(0) << pi << endl;

      return 0;
    }
The code is totally broken: GPT-4 entirely fails to understand that floats have limited precision.

It's cool (and very helpful) that it can regurgitate Plouffe's algorithm (although I haven't checked that the algo in the answer is correct), but to me, this example clearly proves that GPT-4 is simply a parrot with a very large memory that does not understand anything it does.

A glorified hash table with a very sophisticated compression and retrieval algorithm, but a very far cry from AGI.

It's going to allow us to build a much better search engine, a great coding assistant, it will enhance your productivity as a programmer, but it is - so far - altogether incapable of replacing you: it does not understand what it's doing.

[EDIT 1]: I checked Plouffe's formulas and GPT-4 gets it wrong: the key feature of Plouffe's formula is that it yields binary or decimal digits of pi directly [1] (no iteration required), so ... why does GPT-4 loop a million times to produce the answer? ... meh.

[1] http://plouffe.fr/NEW/A%20formula%20for%20the%20n%27th%20dig...

[EDIT 2]: oh, and, I just tried the same example on Bard; it fares way, way worse than GPT-4:

    #include <iostream>
    #include <iomanip>

    using namespace std;

    int main() {
      // Calculate the first 1000000 decimal places of pi.
      double pi = 3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679;

      // Write the first 1000000 decimal places of pi to the console.
      for (int i = 0; i < 1000000; i++) {
        cout << setprecision(10) << pi << " ";
      }

      return 0;
    }
These things are impressive, but they have a very long way to go.

Worst of all: at this point they seem incapable of verifying the consistency of their answers, then correcting and iterating until they at least output something vaguely coherent.

albertopv 3 years ago

Isn't it a paradox that IT is basically making itself jobless? Has it ever happened before?

cvhashim04 3 years ago

We’ll all be prompt engineers just tweaking code the bot spits out to meet requirements.

  • jstx1 3 years ago

    There's something about "prompt engineer" that I really dislike. It's not like we call ourselves "Google and Stack Overflow engineers" right now. Titles should stay about the same in the future; who cares what tool you use to get your job done?

  • 300bps 3 years ago

    So marginally different than the Google query engineers we all are today?

    • crop_rotation 3 years ago

      Google querying takes some degree of skill. The LLM takes it to the next level. There are levels of improvement in productivity, and at some level the technology just restructures the industry.

    • ineedausername 3 years ago

      marginally to not at all

ur-whale 3 years ago

LLMs are parrots with very, very large memories, an amazing compression algorithm, and a very good "interpolator" (something that can take a bunch of retrieved facts and synthesize a mix of them)

I've met a lot of people in my professional career that are deemed "experts" because they have the exact same skill set as LLMs : huge memory and a gift for crafting BS.

But in both cases, there is no actual thinking involved.

In particular, if the answer produced does not actually solve the problem at hand, there is no "check that my solution works, correct, and iterate towards an actually working solution", something most humans do very instinctively and naturally.

So, TL;DR: if you are an "expert" in the same way LLMs are "experts", i.e. you just regurgitate knowledge and fudge it to make it look like it makes sense, then YES: you will get replaced, and by the way: thank god for that.

If, on the other hand, you're an actual "expert", in that you are capable of leveraging your vast encyclopedic knowledge of a subject to guide you towards an actual working solution to a problem, then you're very likely safe for quite a while longer.

giantg2 3 years ago

This sounds like an exact repost from last week, but I can't find it.

amensch 3 years ago

Well I guess work on making these GPT programs then?

p0nce 3 years ago

Where is that tool I can use?

okBroOkBroOkBro 3 years ago

WWJD (What Would Jordan [Peterson] Do?), OP?
