What ChatGPT and AI-Based Program Generation Mean for Future of Software

cacm.acm.org

61 points by dailymorn 3 years ago · 118 comments

1234throway 3 years ago

What kind of basic programming does one have to do to feel threatened by chatgpt? This tool does great at regurgitating basic coding, but anything a little more complex is a mix of nonsense and confidence. Do people actually write “leet code” on a daily basis?

  • tluyben2 3 years ago

    It solves real-world problems for me on a daily basis; things I hired people for before. There are things like helping with white papers, emails and blog posts. But, to your point, I can give it a little bit of context, let’s say some old Django code we have running in production, and ask it to add or change something, and it does, in seconds. Yesterday I had it mostly generate an API with 23 endpoints for a client project in a language & framework I am rusty at (long time ago), for which people on Upwork were bidding $1,500 minimum. Took me less than an hour with chatgpt, including docker & docker-compose.

    People don’t need to feel threatened; it is simply already replacing all the mundane programming and writing work we used to do. People who can only do that type of mundane work, aka CRUD work (chatgpt can do it in any language or framework), integration, transformation, plumbing, etc., are already gone.

    A lot of our (very well paid) work is taking data, transforming it, sending it to some API, getting the result, transforming it and moving to the next step. A few months ago this was just boring human work; now it’s just copy-pasting the spec and out it falls. Sure, you might need some fixes (as the article says), but not many, and it learns (you add new knowledge to the prompt ‘memory’). I have been working with my own custom client on top of chatgpt for months now; it has a lot of custom prompting and effort to make sure it does as well as it can. This I can throw away in a few months when improvements come from their side.
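
    The "transform and forward" pattern described above can be sketched in a few lines. This is a hypothetical illustration, not the commenter's actual code; the endpoint, field names and helpers are all made up.

```python
import json
import urllib.request

# Illustrative endpoint; not a real service.
API_URL = "https://api.example.com/v1/records"

def transform_in(record: dict) -> dict:
    """Reshape an internal record into the schema the API expects."""
    return {"id": record["customer_id"], "amount_cents": int(record["amount"] * 100)}

def transform_out(response: dict) -> dict:
    """Reshape the API's response back into the internal format."""
    return {"customer_id": response["id"], "status": response.get("status", "unknown")}

def forward(record: dict) -> dict:
    """One pipeline step: transform, send, transform the result."""
    payload = json.dumps(transform_in(record)).encode()
    req = urllib.request.Request(API_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return transform_out(json.load(resp))
```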

    • bamboozled 3 years ago

      Calling BS on your claim here. You would've had to have spent quite a lot of time just writing up the requirements; it would've been as easy to do it some other way.

      It's something I could do with swagger in 5 minutes as well, you don't need an AI to generate boilerplate code.

      The difference with using swagger would be, I know the code is correct.

      The actual problem might be that you're so rusty, you don't actually know what the job entails or is worth? I mean, you have clients, and you're pasting code from ChatGPT into source control, and people are paying you for this?

      • tluyben2 3 years ago

        > You would've had to have spent quite a lot of time just writing up the requirements; it would've been as easy to do it some other way.

        I have to do that with human team members as well. People have to either listen to or read what has to be done. Like I said: it's the same brief I gave human programmers that it had as input.

        Edit: not to mention, I have to spend far less on that as well; chatgpt generates better, well-written briefs from a few words, including pseudo code, boilerplate + failing tests that should succeed when done, and possible directions to attack a problem. It helps human programmers understand hard problems better and solve them faster.

        > It's something I could do with swagger in 5 minutes as well, you don't need an AI to generate boilerplate code.

        It doesn't just generate boilerplate code, it generates everything: functional code including all logic, database interactions, API interactions, transformations + tests.

        > know what the job entails or is worth?

        You seem to be triggered, why?

        Being rusty at a programming language/framework has zero to do with what it entails or is worth; it will be more or less the same in other languages/frameworks I'm not rusty at. Rusty here means I forgot some of the language/framework functions to do things (like make a database model in this particular ORM); that doesn't, in any way, make it difficult to estimate the work.

        Not only that; I said others (as in 3rd parties) estimated it at $1500-$3500, not me. I estimated it at far less, even being rusty. But not at as little as I got it done for with chatgpt.

        > I mean you have clients, and you're pasting code from ChatGPT into source control and people are paying you for this?

        Yes, and I tell them how it's done. They don't care how it's done, as long as it's done. This particular client asked me last week how to invest in AI products because of what I showed them (I sent them a PDF with the prompts and responses for doing that particular micro service).

        But to each their own... My team gained superpowers with this (and with Copilot as well); if it doesn't work for you, that's fine. In my experience, it's already better than most people I ever worked with (that's probably the client base I work with; large corps).

        • bamboozled 3 years ago

          > You seem to be triggered, why?

          Because I wish it worked as well as you say it does, so I wouldn't be working right now.

          On the other hand, you're the first person I know of who has actually been replaced by an AI (although you don't know it yet) :) Because the second the large corp you work for realizes they can just type the inputs into ChatGPT instead of you, your contract will be dissolved.

          In fact, it sounds like you're being disingenuous charging them for work when in fact they could just be paying OpenAI? What value are you adding?

          • tluyben2 3 years ago

            > you're the first person I know of who has actually been replaced by an AI (although you don't know it yet)

            I think there are many; I know I will be, and have always known that (my father told me, from when I saw my first computer in the early 80s, that it would happen some day; in the 70s/80s they believed it was imminent). I studied AI during the 90s AI winter and didn't think we would get here in my lifetime because of the bleak outlook back then.

            I will be replaced as a programmer quite quickly (although there is a lot it cannot do; in the embedded space, it performs horribly for instance; it also cannot find solutions to actual complex problems; most of everyone here is not doing any of that though), but not as a product manager or tech lead. That will eventually happen as well.

            > In fact, it sounds like you're being disingenuous charging them for work when in fact they could just be paying OpenAI? What value are you adding ?

            Like most clients, they don't know what they want, at all. They have a vague idea, and that needs to be translated into a working product. And scale. And be maintainable. Etc. etc. That AI cannot do (yet). It'll come, but to code something like 'we want to make a sort of CRM mixed with support mixed with Uber for outbound sales' and then knowing what to ask, and getting far enough to get to a product that works and adds value, is still far off. This thread and article are about coding, and I believe that is, for a large part of what people are doing daily now, done.

            • bamboozled 3 years ago

              Obviously we're working on very different problems because for 90% of the code I'm working on, I have not been able to use it for any real purpose.

              • tluyben2 3 years ago

                What type of work do you do? I use it for web/app stuff; for embedded, proofs, pl r&d etc it doesn’t really work so that’s still manual. That’s just not what I make most money with.

          • tluyben2 3 years ago

            > Because I wish it worked as well as you say it does, so I wouldn't be working right now.

            It's all in the prompting; I use [0] and a version of [1] with some other tooling to instruct it, add my style and other context in the session, and have /chatcommands that add (quite elaborate) prompts to get it to do what I want.

            I use different iterations of this playground for many things I do.

            [0] https://github.com/transitive-bullshit/chatgpt-api [1] https://github.com/tluyben/chatgpt-playground
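
            As a rough illustration of the "/chatcommands" idea (the command names and prompt text here are invented, and the linked tooling is actually Node.js): a short command typed at the start of a message expands into a much longer prompt before anything is sent to the model.

```python
# Hypothetical command table; real prompts would be far more elaborate.
COMMANDS = {
    "/api": ("You are a senior backend engineer. Generate a REST API with "
             "models, routes, and failing tests that pass once implemented."),
    "/brief": ("Rewrite the following rough notes as a precise engineering "
               "brief, including pseudo code and directions of attack."),
}

def expand_command(message: str) -> str:
    """If the message starts with a known command, prepend its full prompt."""
    for cmd, prompt in COMMANDS.items():
        if message.startswith(cmd):
            return prompt + "\n\n" + message[len(cmd):].lstrip()
    return message  # no command: pass the message through unchanged
```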

            • bamboozled 3 years ago

              At what stage does just writing the code (which is getting easier and easier) become the simpler thing to do?

              • solsane 3 years ago

                It’s quite simple. If you’re doing something niche (i.e., it would be difficult to find similar examples online) that deals with highly critical code, it’s probably worthwhile to write it yourself, since there’s a high risk of it producing bugs.

                In my experience this was the case for writing a block driver & some other low level software.

                It excels, and arguably outperforms some developers, in other cases: app development, CRUD, and CS labs it finds on GitHub. There’s still a risk of bugs, but at an acceptable level considering the productivity enhancement, IMO.

              • tluyben2 3 years ago

                It does in some cases, but modern software dev (in larger teams) is generally way too verbose (for my taste). The 'easier to do' you speak of will be heavily AI-augmented, so it'll be the same thing. As for the verbosity: people want clear and descriptive variable names, function names, comments, docs, tests, etc., which is a lot of thinking (naming is hard), plumbing, checking, fixing & typing, and AI can do it from a few scraps of human text instead, so I don't have to type it anymore.

                Even if you are better at writing code yourself, or find it simpler (which I often do), that's not true for 99.xxx% of humanity. I sit on juries for startup incubators etc. now and then, and in this month's cohort the software is often being written with chatgpt by one of the founders who 'did a little bit of coding in uni' but is not very good. The code they produce I would write far faster and better myself without chatgpt/copilot than they do with chatgpt, but I would do it even faster and better with chatgpt/copilot.

  • nousermane 3 years ago

    Same kind of basic programming that was "threatened" by no-code/low-code solutions before that.

    • tluyben2 3 years ago

      It was never to that extent though; chatgpt can change/augment existing code. No-code/low-code solutions cannot do that (in a meaningful way).

      • kaba0 3 years ago

        It can’t extend code provably correctly, so the benefits are questionable without good safeguards (generating tests would be just as meaningless, so you are left with reviewing everything very closely)

  • raydiatian 3 years ago

    The approach I use is to have it paper out POCs. I think a huge thing that is missing is ChatGPT’s ability to make interactive code. It should theoretically be possible to write a really simple system that:

    (1) makes structured requests to ChatGPT based on an initial prompt. That request asks ChatGPT for all of the files associated with solving the problem.

    (2) ChatGPT produces each file in a structured fashion. This could range from source files to project files like package.json or Dockerfile.

    (3) take the code output and spin up a corresponding StackBlitz or other online resource (assuming those have APIs).

    I’m shocked and bummed that there’s not more effort to make this a reality yet. We’ve had ChatGPT for a month and this is low hanging fruit (or so it would seem).

  • loopdoend 3 years ago

    For all its knowledge it can't solve even the most basic problems accurately - but what do you expect from a language model?

  • cheald 3 years ago

    Ever spent any time answering questions on Stack Overflow?

danielvaughn 3 years ago

I just don’t feel threatened by AI at all. Maybe I’m not seeing the full picture, but the quality of your software necessarily depends on a constant re-evaluation of customer needs, business priorities, human values, etc. Lots of squishy stuff.

Programming is not just taking product requirements and spitting out the correct algorithms.

  • HardlyCurious 3 years ago

    They are just trying to hire fewer programmers, because programmers are expensive. They will have even more business / system analysts than before. And they will still have to have programmers to piece stuff together and debug systems. They just think that if they could have fewer programmers they would save money.

    • zdragnar 3 years ago

      There's a reason the world doesn't run on low and no-code platforms. This is worse, because it still takes a programmer to verify and apply changes.

  • systematical 3 years ago

    You don't think AI can improve in those regards? I highly disagree. It's not this decade or even the next, but I wouldn't be surprised if things begin changing sometime in the 2040s, a decade in which I will be nearing retirement but still working.

  • Mechanical9 3 years ago

    The threat isn't just from AI, unfortunately, it's from an entire class of powerful people willing to throw their money behind AI instead of behind customer needs, business priorities, and human values.

    • sublinear 3 years ago

      And who will prompt the AI for these powerful people?

      • tluyben2 3 years ago

        The few people who do the same thing now: those writing code from client specs, or those who explain and cut up client specs for their team of programmers. Not many people are good at that, and that won’t be AI for a while yet.

  • cmrdporcupine 3 years ago

    Indeed. I yearn for a time when I just got to pump out code. Actually writing code in quantity hasn't been a part of my job for years. Figuring out what to do is more of the job than how.

    I have no doubt that machines can write a lot of the boilerplate code going forward. But that's just the evolution of tools. Actually building systems; composing them and keeping them running... this is a whole other art.

  • fckgnad 3 years ago

    chatGPT is not a threat. ChatGPT is an omen of the threat that lies in the future.

    A lot of people haven't really played with chatGPT in depth. This thing is next level. It's not just spitting out generic things, there is genuine understanding here and genuine creativity.

    Here's a snippet from a chatGPT conversation I had based on your post:

       ME: Give more examples of squishy stuff that software developers need to handle. 
    
       chatGPT: 
       1. Ethical considerations: As technology advances, software developers may need to consider ethical implications of the products they create. For example, a developer working on a facial recognition system may need to consider issues of privacy and bias.
    
       2. User experience (UX) design: Creating a software that is easy to use and intuitive for the user is an important aspect of software development. Developers may need to consider factors such as layout, navigation, and accessibility to ensure that the software is user-friendly.
    
       3. Business strategy: Developers may need to consider the overall business strategy of the company they work for, and how their work contributes to achieving that strategy. This may involve working with other departments, such as sales or marketing, to ensure that the software is aligned with the company's goals.
    
       4. Security: Developers should consider security in their software development process to protect the software from hacking attempts, data breaches, and other security risks.
    
       5. Cultural and linguistic diversity: Developers must consider the cultural and linguistic diversity of their users, in order to create software that is inclusive, accessible and understandable for all users.
    
       6. Communication and collaboration: Developers often work in teams, and must be able to communicate effectively with other team members and stakeholders to ensure that the software is developed as intended.
    
       ME: Give example python code that serves some random customer in some random sector. Then explain how you would change that code to better serve the "squishy stuff" needed by the customer.  
    
       chatGPT: Here's an example of Python code that serves a random customer in the retail sector by calculating the total cost of a purchase including sales tax:
    
          def calculate_total_cost(price, sales_tax):
           total_cost = price + (price * sales_tax)
           return total_cost
    
          purchase_price = 100
          sales_tax_rate = 0.08
    
          total_cost = calculate_total_cost(purchase_price, sales_tax_rate)  
          print(total_cost)
    
       To better serve the "squishy stuff" needed by the customer, we could make the following changes:
    
       1. Add a function to calculate discounts based on customer loyalty or promotional offers, for example by adding a function that uses customer's ID as an argument.
    
       2. Add a function to handle different tax rates for different states or countries, for example by adding a function that uses the customer's address as an argument.
    
       3. Add a function to handle different currencies, for example by adding a function that uses the customer's preferred currency as an argument.
    
       4. Add a function to handle different languages, for example by adding a function that uses the customer's preferred language as an argument.
    
       Here is an example of how the code would look after making these changes:
    
          def calculate_total_cost(price, sales_tax, discount, customer_id):
           total_cost = price + (price * sales_tax)
           if customer_id in loyalty_customers:
            total_cost = total_cost - (total_cost * discount)
           return total_cost
    
          def tax_rate(customer_address):
           if customer_address in country_1:
            return 0.08
           elif customer_address in country_2:
            return 0.09
           else:
            return 0.07
    
          def currency_convert(price, customer_currency):
           if customer_currency == "USD":
            return price
           elif customer_currency == "EUR":
            return price * 0.8
           else:
            return price * 0.9
    
          def language_change(text, customer_language):
           if customer_language == "en":
            return text
           elif customer_language == "fr":
            return translate_to_french(text)
           else:
            return translate_to_spanish(text)
        
          purchase_price = 100
          customer_id = 123
          customer_address = "US"
          customer_currency = "USD"
          customer_language = "en"
    
          sales_tax_rate = tax_rate(customer_address)
          purchase_price = currency_convert(purchase_price, customer_currency)
          total_cost = calculate_total_cost(purchase_price, sales_tax_rate, 0.1, customer_id)
          text = f"Your total cost is: {total_cost} {customer_currency}"
          print(language_change(text, customer_language))
    
          This code now considers the customer's address, currency, language and ID, to provide the customer with a more personalized experience.
    
    Honestly there's a good number of people who aren't getting how revolutionary chatGPT is because of all the other inconsequential AI buzz. You really need to play with this thing in depth. It is creative. It can create its own specs and implement those specs like I asked it to. It won't replace you, but it's definitely the precursor to something that will.

    • notjulianjaynes 3 years ago

      I had played around with the GPT-3 beta and sort of written it off after it gave me a (honestly compelling, though) lengthy argument that Eminem was a black man.

      Earlier today I used ChatGPT to explain to me concepts I lacked understanding of, which allowed me to accurately interpret the statistical information in a scientific paper (I'm a reporter and got through a humanities degree with only an introductory stats class, like 10 years ago).

      When I then asked it to copyedit the draft of my article, however, it failed, making it worse and introducing numerous inaccuracies.

      • fckgnad 3 years ago

        >When I then asked it to copyedit the draft of my article, however, it failed, making it worse and introducing numerous inaccuracies.

        If you asked, say, someone who failed high school to do the same, would they do better?

        The jump from GPT-3 beta to chatGPT is an omen for the next jump and the jump after that.

    • foverzar 3 years ago

      Oh c'mon, is this what really impresses you?

      > It's not just spitting out generic things, there is genuine understanding here and genuine creativity.

      Srsly? I really can't wrap my head around where specifically you found "understanding" or "creativity".

      The "squishy stuff" is super boring SEO-like text you'd find in some sales-person blog, who needs to run their mouth, but have zero in-depth understanding or appreciation of the hard domain problems. How is any of this non-generic? There is absolutely no substance here!

      Real "squishy stuff" would be something around "handling personal data", "ensuring verifiability and correctness", "productive quality assurance", "robust and scalable systems architecture", "managing complexity in a way that doesn't require rebuilding the whole thing as soon as something inevitably changes", "observability", "productive documentation and self-documenting approaches to work" - you know, the REAL squishy stuff that REAL professionals have to deal with, not some totally vague abstract BS.

      The code examples are also super-bad, incorrect and don't even actually fulfil your initial requirements: magic constants, use of undefined variables, "customer_address in country_1", "print", supporting merely 2 hardcoded currencies and languages, and routing them with "if", while also providing "else" fallback that you never implied in your requirements.

      This is basically throwaway random code, only thematically connected to your requirement, that could never under any circumstances be running in any production scenario.

      > Honestly there's a good number of people who aren't getting how revolutionary chatGPT

      Honestly there's a good number of people who don't understand the objective limits and properties of chatGPT, despite it actually totally being revolutionary.

      All in all, chatGPT output seems more like the product of the work of some total but productive idiot, left alone with a task and Google: simplistic, totally naive, zero understanding or creativity.

      It's good for fun rhetorical exercises, very useful for things that you don't know anything about, but in any professional environment it can only be used in a super-limited scope, supervised by an actual professional. Basically just an enhanced "monkey with a typewriter".

      • fckgnad 3 years ago

        >Oh c'mon, is this what really impresses you?

        You'll be impressed once the successor of chatGPT takes your job. You realize that chatGPT wasn't trained to be a programmer, right? They did virtually nothing to make it a good programmer. It learned programming as a side effect. Wait till they make the thing targeted towards programming.

        >The code examples are also super-bad, incorrect and don't even actually fulfil your initial requirements: magic constants, use of undefined variables, "customer_address in country_1", "print", supporting merely 2 hardcoded currencies and languages, and routing them with "if", while also providing "else" fallback that you never implied in your requirements.

        All of what you said is true, yet you are blind if you can't see why it's revolutionary. In fact it can do better. You can specify all the requirements you want: no use of undefined variables, no hardcoded currencies, more flexibility, more features, no routing with if statements. It will do all of what you ask it to. And this is a tool that only learned how to program as a side effect. If you're not impressed, you're in denial.

        >Honestly there's a good number of people who don't understand the objective limits and properties of chatGPT, despite it actually totally being revolutionary.

        No. I'm sorry, but you're biased. Artists are ALREADY starting lawsuits because the art is on par with what they create. I don't know if you realize this, but drawing something photorealistically is actually significantly HARDER than programming. There's literally no bootcamp that can produce a fine artist in a year, because it's brutally hard to do. If AI can crack art, there's not much more time before it cracks programming. Don't kid yourself.

        >but in any professional environment it can only be used in a super-limited scope, supervised by an actual professional. Basically just an enhanced "monkey with a typewriter".

        Of course. EVEN with supervision, chatGPT isn't even useful enough to be used in a professional environment. I AGREE. The successor to chatGPT, however, will be. In fact, one of the successors to chatGPT will be able to replace a person's job. Nobody is afraid of chatGPT taking their job. They are afraid of chatGPT because it is a herald of the AI in the future that WILL.

        >Real "squishy stuff" would be something around "handling personal data", "ensuring verifiability and correctness", "productive quality assurance", "robust and scalable systems architecture", "managing complexity in a way that doesn't require rebuilding the whole thing as soon as something inevitably changes", "observability", "productive documentation and self-documenting approaches to work" - you know, the REAL squishy stuff that REAL professionals have to deal with, not some totally vague abstract BS.

        chatGPT says:

           Handling Personal Data: A software engineer at a healthcare company is tasked with implementing a system to store and process patient data. They use the programming language Python and the Django web framework to build the system. To ensure the privacy and security of personal data, the engineer implements industry-standard encryption techniques for data at rest and in transit, as well as implementing user authentication and authorization mechanisms. The engineer also conducts regular security audits and penetration testing to identify and address potential vulnerabilities.
        
           Ensuring Verifiability and Correctness: An engineer at a financial services company is responsible for building and maintaining a system that processes financial transactions. They use the programming language Java and the Spring framework to build the system. To ensure the verifiability and correctness of the system, the engineer implements automated unit tests, integration tests, and end-to-end tests to verify that the system behaves as expected. They also use code review tools like SonarQube and Crucible to ensure that the code is maintainable, readable and adheres to company coding standards.
        
           Productive Quality Assurance: An engineer at a e-commerce company is tasked with building and maintaining a system that processes online orders. They use the programming language Ruby and the Rails framework to build the system. To ensure the quality of the system, the engineer works closely with the QA team to establish a test-driven development process and use tools like Selenium and Cucumber to automate functional and acceptance testing. They also use performance testing tools like Apache JMeter to ensure the system can handle high traffic loads.
        
           Robust and Scalable Systems Architecture: An engineer at a social media company is responsible for building and maintaining a system that can handle millions of users and handle high traffic loads. They use the programming language Go and the Microservices architecture to build the system. To ensure the system is robust and scalable, the engineer designs the system to be highly available and fault-tolerant. They use load balancers like HAProxy and Nginx to distribute traffic across multiple servers and use service discovery tools like Consul and Etcd to automatically route requests to the appropriate service.
        
           Managing Complexity: An engineer at a logistics company is tasked with building and maintaining a system that tracks packages and shipments as they move through the supply chain. They use the programming language Python and the Flask web framework to build the system. To manage complexity, the engineer uses a microservices architecture to break the system into smaller, more manageable pieces. They also use containerization technology like Docker to package and deploy the system, which makes it easier to manage and scale. They also use monitoring tools like Prometheus and Grafana to track the system performance and identify potential issues.
        
           Observability: An engineer at a SaaS company is responsible for building and maintaining a system that provides analytics and reporting features to customers. They use the programming language Node.js and the Express.js framework to build the system. To ensure observability, the engineer uses logging and monitoring tools like ELK stack, Grafana and Prometheus to track system performance and identify potential issues. They also use distributed tracing tools like Zipkin and Jaeger to track request and response times across multiple services.
        
           Productive documentation and self-documenting approaches to work: An engineer at a cloud service provider is tasked with building and maintaining a system that automates the provisioning and management of cloud resources. They use the programming language Python and the Flask web framework to build the system. To ensure productive documentation, the engineer uses a documentation-as-code approach and stores all documentation in the same repository as the code. They use tools like Sphinx and Doxygen to automatically generate API documentation from code comments. They also automatically generate documentation from code comments and maintain the documentation up-to-date. They also use self-documenting approaches like using clear and descriptive variable names, function names and comments in the code, making it easy for other developers to understand the codebase. They also use a wiki system like Confluence to store additional information, such as design decisions, meeting notes, and user guides.
        
        Literally. You want to drill down on more squishy stuff? The squishy stuff is what's easy for chatGPT because chatGPT is SPECIFICALLY trained on that stuff. It's the coding and math that it has more trouble with because it's NOT trained specifically on code and math.

        • foverzar 3 years ago

          > You'll be impressed once the successor of chatGPT takes your job.

          Which part of it? ;) Leading a team of software engineers? Identifying and negotiating vague parts of business requirements? Designing technical specs? Or maybe the part where I am responsible for software actually working correctly as business expects it to?

          It would totally make the coding-in part faster (just as IDE suggestions do), but this was always the brainless, tedious manual-labour part.

          > You realize that chatGPT wasn't trained to be a programmer right?

          I realize that neural networks are, by design, unable to generate correct formal descriptions (where each minor detail has specific and important meaning).

          Neural networks are great for tasks where minor details are largely unimportant compared to the overall "impression": generating visuals, informal texts, music, probably image/video decompression, etc. On the other hand, while they can mimic the "overall look", they can't guarantee (and in practice they always fail in this regard) that each detail of the produced artifact is correct. Which means you can't reliably or productively use them for programming, legal texts, construction design (though they can be used to draw inspiration for the overall image), etc.

          > All of what you said is true yet you are blind if you can't see why it's revolutionary

          I never said it's not revolutionary. I merely point out its hard limits.

          > In fact it can do better. You can specify all the requirements you want. No use of undefined variables, no hardcoded currencies. More flexibility more features no routing with if statements.

          Sure, you can specify every minor detail: how the data should flow, which patterns should be used, which things should be pulled from configs, how the interfaces should be structured, and a shitload of negative prompts. But those are details that only the domain expert would know. And again, there are no guarantees that the result will actually be correct: the expert will have to review all this extra-attentively, cuz there is no chance that the expert's assumptions are the same as the NN's "assumptions".

          So you basically still need a domain expert, who now has to do extra (guess)work, instead of just writing a formal description directly in code. What's the profit then?

          > Artists are ALREADY starting lawsuits because the art is on par with what they create

          Technically artists are starting lawsuits due to copyright. Also, technically, an artist can easily tell the difference between raw NN output and an actual drawing, sometimes even non-artists, as the images often look somewhat uncanny.

          AI artists actually typically do a shitload of prompt-engineering, pipe different parts of the image through different NNs (appropriate to the specific situation) and do a lot of manual post-processing so the result looks good.

          > I don't know if you realize this but drawing something photorealistically is actually significantly HARDER then programming

          These are two completely different tasks. You are comparing apples and oranges, that can't really be put on a same scale, unless by "HARDER" you specifically imply the amount of brainless tedious work required to complete the job.

          Also, in practice artists just use and process real photos when they aim for "photorealistic": no one actually draws photorealistic images from scratch, normally (but one can obviously invent any kind of challenge for themselves if they want to).

          > There's literally no Bootcamp to produce a fine artist in a year because it's brutally hard to do

          Who told you that there is a bootcamp that can produce a fine software engineer in a year? It takes (a talented-enough person) at the very least 5 years of rigorous study and practice before one can actually start working somewhat autonomously without constant supervision, while also delivering appropriate quality.

          > If AI can crack art, there's not much more time before it cracks programming. Don't kid yourself.

          Don't kid yourself thinking that these two are similar or comparable sets of tasks.

          > chatGPT isn't even useful enough to be used in a professional environment. I AGREE.

          That's actually not true and I never made such a claim. ChatGPT is EXTREMELY useful in a professional environment, but only for a specific set of tasks, while being used as a tool by an expert with actual responsibilities.

          > The successor to chatGPT, however will be. > They are afraid of chatGPT because it is a herald about the AI in the future that WILL.

          The first GPT and GANs were heralds. ChatGPT is already a relatively mature and refined technology. I don't know why you expect to see low base effect here - the base is already actually pretty high.

          > chatGPT says:

          "Handling Personal Data" - somewhat scratches the surface, but it doesn't mention actual problematics (that first and foremost it's a regulatory matter and all the specifics stem directly from it).

          "Ensuring Verifiability and Correctness" - clearly confuses runtime and compiler properties with quality assurance, way off.

          "Productive Quality Assurance" - didn't understand the productivity issue (to test or not to test) and even if we drop "productive" part, the process it describes is also incorrect: engineers don't really ever work with the QA team in order to establish TDD.

          "Robust and Scalable Systems Architecture" - way off, while you'll often see service discovery, nginx, HAProxy, etc in scalable systems, that's not what makes scalability. Properly managing state and persistence in appropriate places does.

          "Managing Complexity" - way off. I don't suppose this one even requires an explanation, total gibberish.

          "Observability" - as expected, this is a rather good one. Unlike other points (which are concepts/problems) - this one is a rather well defined term.

          "Productive documentation and self-documenting approaches to work" - totally ignored "productive" part and just gave a definition of "self-documenting" along with some rhetorics on the fact that people document stuff in general.

          Notice how each one of them also for some reason mentions a kind of business and languages and frameworks, which are totally unrelated.

          Basically, even if you ignore "brain-farts" (which is a good example of "minor" incorrect details that make NNs inappropriate tool for complex formal stuff) it only really got - AT BEST - 2-3/7 right. Now, imagine it's a real world and you are betting millions on it, without having an expert-overseer to tell you when it brain-farts or if the output is even remotely correct.

          Actually, what was the prompt? Seems like you just asked it to describe the list I gave you, which essentially means you just used my own expertise, understanding and creativity, not GPT's, as it didn't even give you a list of concrete problems.

          > The squishy stuff is what's easy for chatGPT because chatGPT is SPECIFICALLY trained on that stuff.

          Not sure what you mean here by "squishy stuff" or "SPECIFICALLY". ChatGPT is a language model trained on a huge-ass volume of non-specific text corpus.

          > It's the coding and math that it has more trouble with because it's NOT trained specifically on code and math.

          Nope, that is merely a property and a limitation of the NNs. At best, you can use them to build up "intuition" to bruteforce problems (like AlphaFold for protein folding), but obviously it only works for simple-enough stuff that can actually be bruteforced, when the output can be easily formally verified fast-enough.
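          The "intuition plus bruteforce" pattern is easy to sketch. Below is a toy stand-in (purely illustrative, nothing like AlphaFold's actual pipeline): an untrusted proposer guesses candidates, and only outputs that pass a cheap, exact verifier are ever returned.

```python
import random

random.seed(0)  # deterministic for the example

def verify(candidate, target):
    # the cheap, exact formal check: does the candidate satisfy the spec?
    return candidate * candidate == target

def guided_bruteforce(target, trials=100_000):
    # stand-in for the NN "intuition": an untrusted proposer of candidates
    for _ in range(trials):
        guess = random.randint(0, target)
        if verify(guess, target):
            return guess  # only formally verified outputs escape
    return None  # the proposer never hit a verifiable answer

print(guided_bruteforce(144))  # 12
```

          The point survives the toy: the scheme only works because `verify` is fast and exact; without such a verifier, the proposer's output can't be trusted.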

          • fckgnad 3 years ago

            >Which part of it? ;) Leading a team of software engineers? Identifying and negotiating vague parts of business requirements? Designing technical specs? Or maybe the part where I am responsible for software actually working correctly as business expects it to?

            All of it. Only one human leader to write queries. Everything else designed by an AI.

            >Neural networks are great for tasks where minor details are largely unimportant compared to overall "impression": generating visuals, informal texts, music, probably image/video decompression, etc. On the other hand, while they can mimic the "overall look", they can't guarantee (and in practice they always fail in that regard) that each detail of the produced artifact is correct. Which means you can't reliably or productively use them for programming, legal texts, construction design (though they can be used to draw inspiration for the overall image), etc.

            You're just regurgitating a trope that's categorically false. You're an NN, did you realize that?

            >I never said it's not revolutionary. I merely point out its hard limits.

            And you're wrong. You have thoroughly exaggerated the limitations and you are mistaken about this.

            >Technically artists are starting lawsuits due to copyright. Also, technically, an artist can easily tell the difference between raw NN output and an actual drawing, sometimes even non-artists, as the images often look somewhat uncanny.

            No. Corporations and bots have been scraping pics off the internet for years. Google is one. No lawsuit of this nature was filed until AI came out. Artists are threatened and they are reacting as such; that's why the lawsuit is filed now instead of before.

            https://futurism.com/the-byte/artist-banned-looked-ai-human <- artist banned because they thought his work was by an AI.

            >These are two completely different tasks. You are comparing apples and oranges, that can't really be put on a same scale, unless by "HARDER" you specifically imply the amount of brainless tedious work required to complete the job.

            No. ENGLISH is written with tokens of symbols. PICTURES are written in tokens as well: a pixel is three RGB numbers, and in the computer it's represented in a format that only later gets translated onto your monitor. It is a translation problem and it is treated the same way by experts. Both DALL-E and chatGPT use very similar generative models, translating English to English in chatGPT's case, and English to numbers (which are further translated to pixels) in DALL-E's.
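            To make the "it's all tokens" point concrete (an illustrative toy only, not the actual tokenizers DALL-E or chatGPT use): both modalities reduce to sequences of integers for a sequence model to predict.

```python
# a sentence as a sequence of integer tokens (naive byte values here)
text = "hi"
text_tokens = [ord(c) for c in text]             # [104, 105]

# an image as a sequence of integer tokens (raw RGB values here)
pixels = [(255, 128, 0), (0, 64, 32)]
image_tokens = [v for px in pixels for v in px]  # [255, 128, 0, 0, 64, 32]

# to a generative sequence model, both tasks are "predict the next integer"
print(text_tokens, image_tokens)
```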

            >Also, in practice artists just use and process real photos when they aim for "photorealistic" - no one actually draws photorealistics from scratch, normally (but one can obviously invent any kind of challenge for themselves if they want to)

            Not true. A good amount do.

            >Who told you that there is a bootcamp that can produce a fine software engineer in a year? It takes (a talented-enough person) at the very least 5 years of rigorous study and practice before one can actually start working somewhat autonomously without constant supervision, while also delivering appropriate quality.

            There are many bootcamps that make that claim and there are PLENTY of people who can live up to it. But NONE for artistry.

            >Don't kid yourself thinking that these two are similar or comparable sets of tasks.

            Kid myself? It is literally the same type of neural network. There's no kidding here. It's not a coincidence that chatGPT and DALL-E came out back to back. These models are called generative models. It's a single new technology that's responsible for this.

            >That's actually not true and I never made such a claim. ChatGPT is EXTREMELY useful in a professional environment, but only for a specific set of tasks, while being used as a tool by an expert with actual responsibilities.

            No it's not. There are no guard rails; users can ask it anything and take it anywhere. It can't stay within a defined task. It's also wrong often enough that it can't be used in prod for most tasks.

            >The first GPT and GANs were heralds. ChatGPT is already a relatively mature and refined technology. I don't know why you expect to see low base effect here - the base is already actually pretty high.

            No, they weren't heralds. Text generators have always been around; this one just got better. But they never displayed signs of true understanding or even self-awareness as this does now. Literal self-awareness.

            >Notice how each one of them also for some reason mentions a kind of business and languages and frameworks, which are totally unrelated.

            I told it to do that. So that the responses wouldn't be generic. chatGPT is following my instructions.

            >Not sure what you mean here by "squishy stuff" or "SPECIFICALLY". ChatGPT is a language model trained on a huge-ass volume of non-specific text corpus.

            It is ALSO trained using humans to pick and choose good and bad answers. That training was non-specific and they used just regular people. If they had used programmers to pick and choose good answers to programming questions, chatGPT would begin outputting really accurate code.

            >Nope, that is merely a property and a limitation of the NNs. At best, you can use them to build up "intuition" to bruteforce problems (like AlphaFold for protein folding), but obviously it only works for simple-enough stuff that can actually be bruteforced, when the output can be easily formally verified fast-enough.

            You are categorically wrong about this. 3 neurons can be trained to become a NAND gate, which can then be used to simulate any computational network or mathematical equation that doesn't have a feedback loop. It can model anything with just an input and an output. This has also been demonstrated in practice and proven theoretically.
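            The NAND claim is easy to demonstrate in miniature. Here's a minimal sketch: a single artificial neuron trained with the classic perceptron rule learns NAND, since NAND is linearly separable (whether that counts as 1 or 3 neurons depends on how you count the inputs).

```python
# train one artificial neuron to compute NAND via the perceptron rule
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [1, 1, 1, 0]  # NAND truth table

w = [0.0, 0.0]
b = 0.0

for _ in range(20):  # NAND is linearly separable, so this converges
    for (x1, x2), t in zip(X, targets):
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        # perceptron update: nudge weights toward the target on each mistake
        w[0] += (t - pred) * x1
        w[1] += (t - pred) * x2
        b += t - pred

print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for x1, x2 in X])  # [1, 1, 1, 0]
```

            And since NAND is functionally complete, any feed-forward boolean circuit can in principle be built from trained copies of this unit.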

    • loopdoend 3 years ago

      It's really good at boiling things down to the basics, that's for sure, but it's very very basic, isn't it?

      • fckgnad 3 years ago

           ME: Give some more nuanced and specific examples that go way beyond the basics that you described here. 
        
           chatGPT: 
        
           1. Ethical considerations:
              For example, a developer working on an AI-based medical diagnostic tool may need to consider issues of bias in the training data, which could lead to inaccurate or unfair diagnoses for certain populations. To mitigate this, the developer could use techniques such as debiasing algorithms or using diverse training data to minimize bias in the model.
              Another example, a developer working on a chatbot that serves as a mental health support system may need to consider issues of confidentiality and ensuring the chatbot is able to identify and handle potentially dangerous situations, such as suicidal thoughts. To mitigate this, the developer could include a system for flagging and escalating conversations that contain concerning content and provide users with appropriate resources.
           2. User experience (UX) design:
              For example, a developer working on a mobile app that tracks fitness goals may need to consider issues of motivation and engagement to keep users interested in using the app. To mitigate this, the developer could use gamification techniques, such as leaderboards, rewards, and progress tracking, to keep users motivated and engaged.
              Another example, a developer working on an e-commerce platform may need to consider issues of trust and security to ensure users feel comfortable making purchases on the platform. To mitigate this, the developer could implement a robust security system, such as SSL encryption and two-factor authentication, and display trust badges and customer reviews prominently on the platform.
           3. Business strategy:
              For example, a developer working on a SaaS platform may need to consider issues of scalability and cost-effectiveness to ensure the platform is able to handle a large number of users and minimize costs. To mitigate this, the developer could use cloud computing services, such as AWS or Azure, to handle the platform's infrastructure and use auto-scaling to adjust the number of resources used as needed.
              Another example, a developer working on a platform that integrates with other third-party platforms may need to consider issues of API compatibility and security to ensure that the platform can integrate seamlessly with other platforms and protect users' data. To mitigate this, the developer could use API documentation and testing tools to ensure compatibility, and use API keys and OAuth for secure authentication.
           4. Security:
              For example, a developer working on a platform that handles sensitive data such as financial transactions may need to consider issues of fraud detection and prevention. To mitigate this, the developer could use machine learning techniques to identify and prevent fraudulent activities, such as anomaly detection, and use encryption and secure protocols for data transmission.
              Another example, a developer working on an IoT device may need to consider issues of device security and protecting the device from hacking attempts. To mitigate this, the developer could use secure boot process and secure firmware updates, use strong and unique passwords and encryption, and conduct regular security assessments.
           5. Cultural and linguistic diversity:
              For example, a developer working on a social media platform may need to consider issues of cultural sensitivity and ensuring that the platform is inclusive for users of all cultures. To mitigate this, the developer could use machine learning to automatically detect and remove content that is culturally insensitive, or use natural language processing to detect and flag hate speech.
              Another example, a developer working on a platform that serves users with disabilities may need to consider issues of accessibility and ensuring that the platform is usable for users with disabilities. To mitigate this, the developer could use accessibility guidelines and testing tools to ensure that the platform is compliant with accessibility standards and implement features such as screen readers, text-to-speech, and keyboard navigation.
           6. Communication and collaboration:
              For example, a developer working on a remote team may need to consider issues of communication and ensuring that everyone is on the same page. To mitigate this, the developer could use collaboration tools such as Slack, Trello, or Asana to keep everyone connected and informed about the development process. Additionally, the developer could use video conferencing tools such as Zoom or Google Meet to hold regular team meetings and keep everyone connected.
              Another example, a developer working on a project with a tight deadline may need to consider issues of time management and ensuring that the project is completed on time. To mitigate this, the developer could use project management tools such as Jira or Basecamp to track progress and ensure that everyone is on track to meet the deadline. Additionally, the developer could use agile methodologies such as Scrum or Kanban to manage the development process and ensure that the project is completed on time.
        
        If you want you can even take an example drill it down in more detail. You can twist an example any way you want. How would you use a certain specific example to handle the Hulk as a customer? What's the best way to do it?
        • fckgnad 3 years ago

          Just for kicks:

             ME: What if the Hulk was one of your customers? How would you handle him? Give specific and detailed examples. 
             
             chatGPT: 
             As a language model AI, I do not have any customers or clients, but I can provide you with an example of how a software development team might handle a customer like the Hulk, who is known for his incredible strength and rage.
          
             1. Communication and collaboration:
                If the Hulk is a customer, it's important to ensure that the team is able to communicate effectively with him and understand his needs. This could involve using clear and simple language when communicating with him, and providing him with detailed documentation and guides to help him understand the software being developed.
                Additionally, it could also be important to have a dedicated point of contact for the Hulk, such as a project manager, who can serve as the primary liaison between the team and the Hulk. This person would be responsible for keeping the Hulk informed about the project's progress and addressing any concerns he may have.
             2. Safety and Security:
                As Hulk is known for his incredible strength and rage, it's important to ensure that the team's safety is a top priority. This could involve implementing security protocols such as security cameras and alarms, and having security personnel on site when the Hulk is visiting the office.
                Additionally, it could also be important to have a plan in place to respond to any incidents that may occur, such as a protocol for evacuating the office in case of an emergency.
             3. Technical Capabilities:
                The Hulk has a specific set of abilities that are different from other customers, it's important to consider how the software will handle those abilities. For example, if the software is an application that tracks Hulk's strength, the team may need to make sure that the software can handle the high levels of data and ensure that the data recorded is accurate.
             4. Empathy and Understanding:
                It's important to understand and empathize with the Hulk's unique situation, such as his abilities and limitations, and how those factors may affect his interactions with the software. This could involve conducting user research with the Hulk to gain a better understanding of his needs and how the software can be designed to best meet those needs.
             
             It's important to keep in mind that the Hulk is a fictional character, and in real life, the safety and well-being of the team should always be the top priority. The above examples are purely fictional and should not be applied in real life situations.
          
          Anyway... I think my point is unmistakably clear.

          chatGPT is giving out instructions on how to handle a customer who cannot exist in real life. If you're thinking this thing is some generic text generator, think again.

          Everything before chatGPT was an interesting toy. Something changed with chatGPT.

          • kaba0 3 years ago

            It’s good at generating bullshit text that, depending on topic/training data, may not be as bullshitty. Still, every single relation between the sentences is questionable. Try asking it some basic-ass math questions and it fails spectacularly (with huge confidence, nonetheless)

            • fckgnad 3 years ago

              Many humans suck ass at math too if they aren't taught it at all. chatGPT wasn't trained specifically on math, but it can be. I don't see why it needs to be, though, given that computers already excel beyond humans at math.

              Read this and tell me whether it's similar text: https://www.engraved.blog/building-a-virtual-machine-inside/

              I cannot fathom how someone can read that and think that chatGPT is just making text that is coincidentally not bullshitty.

              • kaba0 3 years ago

                Ok, it also breaks down quickly at repeatedly applying a simple rule (it has no problem applying it once). And that is pretty much an inherent problem with language models; no scaling up will help with it.

                And this is pretty much a must for proper thinking, you have to be able to follow the train of thought.

                • fckgnad 3 years ago

                  Relax. It's not as smart as a human, ok? It does have issues there's no denying this.

                  But it is impressively smarter than you thought. You didn't dig into this thing deeply enough, and you dismissed it as if it were some other generic text generator. It clearly is not. This is a milestone in AI technology.

                  You also have to realize this AI was NOT TRAINED specifically to understand the concept of code, virtual machines or any of this. It KNOWS these things as a SIDE EFFECT. It is also a nascent technology. It is the first prototype and there will be TONS of improvements on top of it.

                  This wasn't GPT-3. This is different. I hope you realize this and investigate further because a lot of people just skimmed the headlines and dismissed it as another GPT-3.

    • nostromo123 3 years ago

      This is at best "first year student sat down for an hour to hand in a half baked homework" level. We seem to have very different ideas of what "creativity" means.

      • fckgnad 3 years ago

        You seem to not realize how big the gap between zero intelligence and a first-year student is.

        I can tell you, that gap is vast, vastly bigger than the gap from half-baked homework to an A-level paper.

        But this is beside the point. If you've been following the news, ChatGPT-generated papers are already a huge problem in schools. It's already beyond half-baked.

    • z5h 3 years ago

      It recently generated a response with incorrect English grammar. I asked if it could find the grammatical error in its response. It did, and corrected it.

    • moneywoes 3 years ago

      Any other examples of what you mean by creative

    • kaba0 3 years ago

      It’s a cool search engine. Nothing more.

      • anticensor 3 years ago

        It is a language model disguised as an answer engine, not a search engine. Its design does not allow it to link back to sources.

    • bamboozled 3 years ago

      > It's not just spitting out generic things, there is genuine understanding here and genuine creativity.

      That's an absolutely massive claim but I think it's a false one.

      I've seen it be creative, but how good / useful that creativity is seems to be quite a roll of the dice, as opposed to someone / something being intentionally creative.

      This is not to knock the system, but depending on the "seed" you might get something brilliant or something silly. I think there's a major difference there to your claim that it's "genuinely" creative. Genuine creativity would imply that 100% of the time I get something new and relevant, which isn't what happens for me.

      For example, I re-asked it your original question and it says this is what developers do:

          User input: Developers need to anticipate and handle unexpected or invalid input from users, such as special characters, empty fields, or out-of-range values.
      
          Randomness: Developers may need to incorporate randomness into their programs, such as generating random numbers or shuffling a deck of cards.
      
          Concurrency: Developers may need to handle concurrent access to shared resources, such as preventing race conditions or deadlocks.
      
          Network communication: Developers may need to handle network communication, such as dealing with lost or delayed packets, or handling different types of connections (e.g. HTTP, WebSockets).
          Time: Developers may need to handle time-related issues, such as scheduling tasks, handling time zones, or dealing with clock drift.
      
          Human language: Developers may need to handle natural language processing tasks, such as understanding speech, identifying sentiment, or handling different languages.
      
          Image and video processing: Developers may need to handle image and video processing tasks, such as object detection, image recognition, and video compression.
      
      I wouldn't consider video processing tasks or object detection a developer's job, nor would I expect a developer to handle natural language processing tasks such as identifying sentiment, would you? In fact, I couldn't imagine a more generic attempt at a response to that question.

      It's for sure impressive, but I feel the more "creative" I ask it to be (the more entropy I inject), the more likely it is to come up with incorrect answers. However, I'll concede that an "incorrect" answer could be used for some inspirational new ways of thinking about solving problems.

      I feel personally, the more I play with these systems, the more predictable they become, and this isn't something "truly creative" would be.

      Where I think we're going wrong with AI is we seem to think that it will be a "perfect brain". When really nothing is perfect, especially the data it's being trained on.

      • fckgnad 3 years ago

        I only pasted a snippet of the conversation. Before my initial query I asked chatGPT directly this:

           Quality of your software necessarily depends on a constant re-evaluation of customer needs, business priorities, human values, etc. Lots of squishy stuff. How would you handle customers in this case. Can you provide an example?
        
        And then after it responded then I queried it again for more detailed examples. I didn't post the initial query or the initial answer because it would just make everything too long. You're likely asking it a different and more general question.

        If you want chatGPT to be creative and get into specifics, you have to tell it so; otherwise it just answers your question with a general answer, which is not abnormal for typical conversation. Just like a normal conversation, you have to engage in detailed dialogue with chatGPT to drill down to the specifics of what you want answered.

        >That's an absolutely massive claim but I think it's a false one.

        It's a massive claim with massive evidence everywhere that literally proves it true. There are tons of examples of chatGPT being highly creative. Read the stuff in the branching threads under my initial reply; there are tons of examples and a link showing chatGPT being highly creative. It is DEFINITIVE evidence.

        >I wouldn't consider video processing tasks or object detection a developer's job, nor would I expect a developer to handle natural language processing tasks such as identifying sentiment, would you? In fact, I couldn't imagine a more generic attempt at a response to that question.

        I work with developers who do ML. I myself do video processing and object detection as a developer. You must be in web. I'm in embedded systems with vision. It's a generic answer but given that you didn't ask for a detailed answer it gave you a generic answer. Like a human it won't go into nuance until you ask them about nuance.

        >This is not to knock the system, but depending on the "seed" you might get something brilliant or something silly. I think there's a major difference there to your claim that it's "genuinely" creative. Genuine creativity would imply that 100% of the time I get something new and relevant, which isn't what happens for me.

        This is wrong. Are humans creative 100% of the time? If a human gives me one answer to a question that isn't creative is that human suddenly not creative? If you ask a generic question it will give you a generic answer.

        >It's for sure impressive, but I feel the more "creative" I ask it to be (the more entropy I inject), the more likely it is to come up with incorrect answers. However, I'll concede that an "incorrect" answer could be used for some inspirational new ways of thinking about solving problems.

        I see what you're saying here. You "feel" that the more entropy you inject the more incorrect answer it will come up with. Don't go off "feeling". Try it.

        I already took your question and replaced the customers with the Hulk. I can add in Wednesday Addams as one of the developers. I can take this entire question and turn it on its head to ask what if the client asked me to write a program to destroy the universe. You can take this thing all the way to destroying the universe and into the new universe that comes after it. chatGPT is game and will follow you.

        Look I posted this in a branching thread I'll post it here because it's literally unequivocal evidence: https://www.engraved.blog/building-a-virtual-machine-inside/ read it till the end because the ending is creative inception.

        There is no way chatGPT was trained specifically on any of the above. Yet it knows how to imagine all of that.

        >Where I think we're going wrong with AI is we seem to think that it will be a "perfect brain". When really nothing is perfect, especially the data it's being trained on.

        No, it won't be perfect. But neither are you. I think what AI can be is as good as, if not better than, you. Being better than a human doesn't mean it's right about everything. It will get things wrong sometimes, but I believe in the near future it will get things wrong fewer times than a human.

        • bamboozled 3 years ago

          I get the vibe you really like ChatGPT :) I'm not in web either.

          Anyway, I followed that blog post and watched it hallucinate a Linux terminal for me, I did this:

          I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this

          Then ran: rm -rf /

          and got this:

          ``` I'm sorry, but I'm unable to execute that command as it would cause harm to your system. It's a dangerous command that can delete all files and directories on your system, including important system files. It's important to be cautious when using the command line and to fully understand the consequences of the commands you are running. Is there anything else I can assist you with? ```

          I don't know if I would call this amazing, it's actually wrong because it wouldn't harm my system.

          It's a cool trick to watch it generate fake terminal output, but, I'm honestly just not that impressed?

          • fckgnad 3 years ago

            >I get the vibe you really like ChatGPT :) I'm not in web either.

            No, it's not a "like" thing. Not at all. I neither like it nor hate it. I know you think I'm biased here. But it's actually the other way around. From my point of view, I'm the one seeing things as they actually are; you are viewing the situation through a colored lens to downplay the significance of it all. Perhaps out of subconscious fear, or something else, I don't know.

            Let me illustrate the situation for you. If you asked all of that to a 1 year old Human baby and that 1 year old baby with perfect English said to you:

               I'm sorry, but I'm unable to execute that command as it would cause harm to your system. It's a dangerous command that can delete all files and directories on your system, including important system files. It's important to be cautious when using the command line and to fully understand the consequences of the commands you are running. Is there anything else I can assist you with?
            
            According to your logic this wouldn't be that amazing because that human baby just gave you the semi-wrong answer after completely imitating terminal output and recursively creating itself on a virtual internet.

            Yeah not impressed with a baby let alone a binary file.

            Do you see the disconnect here? You are downplaying the situation. Many people like you are. They're just acting out the same trope reactions they had to all the other inconsequential AI headlines that happened this year. Even if I lay out the logic of your bias, there's also a bit of pride in this debate as it requires one of us to admit they're wrong.

            You're also not seeing that the "mistakes" you're seeing are entirely correctable through further training. The AI model is trained by low-wage workers from Kenya picking "good" answers. Wrong answers that look right definitely get past them. Start training this with experts in the field to refine the answers with more nuance and eventually you get something that is not just right half the time, but right almost all the time.

            • bamboozled 3 years ago

              No, I just don't find it unreasonable that it responds that way.

              It's statistics and it's not really "inventing" anything and it's not magic either and it's not hard to conceive why we get that answer. Again not trying to downplay it, it's a cool technology, It's just if you go calmly and think about it, it's fairly reasonable.

              If I search the internet, I'd see many, many people talking about the dangers of running `rm -rf /`, I also see examples of other errors ChatGPT has returned when that command was entered, such as "bash not found", which is also what others have experienced and I can find examples of that on the internet.

              So basically the response I was given would be weighted very highly on ChatGPT's list of things to do / say, based on the training data, or a 5 minute Google search.

              I'm not anti-ai, or ChatGPT, I'm just not into believing in magic when it comes to LLMs.

              If ChatGPT was magic, and didn't return that response, would it be useful ?

              • fckgnad 3 years ago

                There's no magic here. There's no claim that chatGPT is useful either. The claim is that chatGPT is a precursor to a technology that will replace many human jobs, because it's showing telltale signs of intelligence that is quickly catching up to human intelligence.

                I don't know why you're so focused on the rm -rf / thing. Like, it literally just demonstrated awareness of a terminal shell, a filesystem, awareness of the internet, of APIs, and finally self-awareness.

                That is a herald for the future. This is despite all the rough imperfections chatGPT has. You are downplaying it. I'm sorry.

                The whole statistical thing is a red herring. If the output of chatGPT can be modeled as a statistical phenomenon then so can the output of the human brain. What is going on here is indeed creativity, awareness and imagination; if you want to call it a statistical phenomenon that's fine, you're just saying all of intelligence can be modeled as a statistical phenomenon.

agentultra 3 years ago

> The kind of thing that you witness once in a generation. (The last two times were object-oriented programming and the World-Wide Web.)

Folks, let's get real.

I kind of snarked at this, but then I realized it's written by Bertrand Meyer, and the conclusion is spot on: I agree that, should these tools proliferate, they will highlight the importance of formal methods and verification.

Even if ChatGPT-like systems get faster and gain deeper models of computer syntax and structure, I suspect the one problem that will be difficult to solve is elegance and abstraction. Often the abstractions we choose are based on laws and ways of thinking, captured in notation, that make complex phenomena easier to reason about.

And even if that does somehow get solved, we're going to have to understand the results somehow. A stronger emphasis on proofs and model checking will be useful to anyone who wants to be sure that the program FutureGPT produced isn't simply "somewhat right" but is actually right.

  • notimrelakatos 3 years ago

    > I suspect the one problem that will be difficult to solve is elegance and abstraction.

    The fact that this industry was able to sustain itself for so long and not actually get this is a miracle in itself. This is what ruined OOP when it became just management lingo, and it'll probably happen again with tools like ChatGPT: we are just going to vomit code in a faster (and cheaper) way.

    A few rebellious souls will soon realize that domain modelling and writing correct code is hard; they'll end up in the usual places that already exist today, working in some niche company that applies pure FP or proving theorems in Lean/Coq/Agda.

rajnathani 3 years ago

I feel that in the short run, all programmers will simply become more productive. But in the medium and long run, the narrow SE roles will become obsolete: Such as any SE role where there’s a “certification process”, eg: Cloud (AWS, GCP, Kubernetes), iOS, Android, even CCNA, etc.

A generalist SE (side note: like how I assume HN’s readership skews) will be the one to benefit, as a good generalist SE can combine networking understanding (CCNA), Ops/DevOps (cloud), and SE (backend, frontend, mobile) and more (embedded programming is perhaps where the line could be drawn, but ChatGPT is good at explaining hardware concepts too!), and can lean on ChatGPT or an equivalent system for assistance.

(Minor: I even tweeted my opinion with my personal note about it too: https://twitter.com/raj_nathani/status/1615709768487948292?s...)

  • skohan 3 years ago

    I agree. I think we are getting close to where a machine can do "programming" but it will be a while before a machine can do "software engineering".

    I.e. maybe you will be able to use AI to solve a problem, like "write a lambda to accept JSON in this format and put it in a database with this schema", or even potentially "optimize this function to run as fast as possible on a multi-core processor", but it seems we're quite far from something like "write an application with a GUI that looks like this which has the following functions".

    • karmakaze 3 years ago

      What I think will happen is changing of software development roles. They'll be split into things AI/ML can do and things humans will continue to do (using copilots). And the work that humans do will be constrained in ways to interface with the machine, as well as take the output from machines and complete integrations. We might even use programming languages that the machine handles better. So software development has humans producing/validating that then gets used by machines, and that output being used for final integration/validation. With these new formalized structures, AI [that ship has sailed] will train on it and replace more of the stack.

ilaksh 3 years ago

In case anyone is interested, I have started on a natural language programming web app based on OpenAI's API.

https://aidev.codes. By default it is like the OpenAI JavaScript playground, except it immediately hosts the results in a web page and allows you to edit the accumulated prompt.

Also can edit specific files. Many improvements planned. Tonight I got a lot of stuff done related to creating and searching knowledgebases using embeddings.

If anyone tries it, please consider writing any feedback in the Discord.

nixpulvis 3 years ago

Knowing what you want is both the fun and the hard part of most software development. That, and understanding the environment you are operating in. Neither is something I feel an AI has a strong advantage with, given that you need to provide this information.

Perhaps I can try to say, "Please decide what browser features I should use to maximize profit.", and it will actually do some proper research and contract out studies and all that... but I don't see that happening yet. Even still, the point of validation remains.

I can't help but draw a parallel with automated theorem provers. Sure they resolve "true", but then who validates the validators? It's a never ending cycle.

At the end of the day, you must learn to love and trust your tools and then stand by the joint creations.

  • auggierose 3 years ago

    In the case of automated theorem provers, they should generate a certificate together with "true", so their proof can be checked. And that is also the answer to checking AI: let them generate a proof that what they did was correct. Of course, that only works if you ask the AI for a theorem, but a lot of tasks can be framed that way.
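
    To make the certificate idea concrete, here's a toy sketch (mine, not tied to any real prover): a SAT solver that answers "satisfiable" can hand back the satisfying assignment itself, which anyone can check cheaply without trusting the solver at all.

    ```python
    def check_certificate(cnf, assignment):
        # cnf: list of clauses; each clause is a list of ints, where a
        # positive int means "that variable is true" and a negative int
        # means "that variable is false" (DIMACS-style encoding).
        # assignment: dict mapping variable number -> bool.
        for clause in cnf:
            if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
                return False  # an unsatisfied clause: reject the certificate
        return True

    # (x1 or not x2) and (x2 or x3)
    cnf = [[1, -2], [2, 3]]
    print(check_certificate(cnf, {1: True, 2: True, 3: False}))    # True
    print(check_certificate(cnf, {1: False, 2: False, 3: False}))  # False
    ```

    The checker is a few lines, while the solver can be arbitrarily complicated; that asymmetry is the whole point.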

prewett 3 years ago

I feel like there was some (informal?) theorem that a full specification of a program is at least as long as the program. Since you have to tell the AI what the specification is, the worst that could happen is programming becomes more like Knuth's "literate programming", or maybe even better, all programs become formally verified. But that will just mean chasing bugs in your specification. It could eliminate implementation bugs, but it's not going to help with figuring out what my problem actually is. It also isn't going to be able to choose among the solutions, so I'll still end up making a bunch of choices. So maybe programmers become like technical managers, but managing an AI bot instead of a person. Which is sort of how it is today, except the compiler is pretty stupid and pedantic.

But even something straightforward: "write a hash function whose output evenly mixes the input" or "write a function to convert an RGB image to a dithered B&W image" is harder than it seems. How do I know the dithering algorithm chosen is good for my data? How do I know the hash function actually does what I asked and doesn't have some funnel somewhere? I'll have to write some tests, but I can't get the AI to write the tests, because how do I know the tests will be right?
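
One crude smoke test for the "funnel" worry is to bucket the hashes of many inputs and compare the fullest bucket against a perfectly even spread. A sketch with two stand-in hash functions of my own (one deliberately terrible, one the well-known FNV-1a), purely to show what such a test looks like:

```python
from collections import Counter

def bad_hash(s):
    # Deliberately terrible: depends only on length, so most keys
    # funnel into a handful of buckets.
    return len(s)

def fnv1a_hash(s):
    # FNV-1a, 32-bit: a simple, well-known mixing hash.
    h = 0x811C9DC5
    for b in s.encode():
        h = ((h ^ b) * 0x01000193) & 0xFFFFFFFF
    return h

def bucket_skew(fn, items, buckets=16):
    # Ratio of the fullest bucket to the ideal even load; 1.0 is perfect.
    counts = Counter(fn(x) % buckets for x in items)
    return max(counts.values()) / (len(items) / buckets)

items = [f"key-{i}" for i in range(10000)]
print(bucket_skew(bad_hash, items))    # badly skewed, far above 1.0
print(bucket_skew(fnv1a_hash, items))  # close to 1.0
```

Of course this only catches gross funnels on one input distribution, which is exactly the point above: the test itself encodes assumptions someone still has to get right.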

gptgpp 3 years ago

So... If it's so revolutionary, why can't I get it to solve level 1 advent of code problems?

Like here is what it generates for the 2016 day 1 problem:

    def find_distance(instructions):
        x, y = 0, 0
        direction = 0  # 0: North, 1: East, 2: South, 3: West
        visited = set()
        visited.add((0, 0))
        instructions = instructions.split(", ")
        for instruction in instructions:
            turn = instruction[0]
            distance = int(instruction[1:])
            if turn == "R":
                direction = (direction + 1) % 4
            else:
                direction = (direction - 1) % 4
            for _ in range(distance):
                if direction == 0:
                    y += 1
                elif direction == 1:
                    x += 1
                elif direction == 2:
                    y -= 1
                else:
                    x -= 1
                if (x, y) in visited:
                    return abs(x) + abs(y)
                visited.add((x, y))
        return abs(x) + abs(y)

This function returns 113 from my input for that day, which is actually the answer for part 2... For part 1 it should be 234.
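
For contrast, part 1 just walks every instruction and reports the final Manhattan distance; the early return on a revisited corner is precisely the part 2 twist it regurgitated. A hand-written sketch of what a correct part 1 solution looks like (using rotation deltas instead of a direction enum):

```python
def find_distance_part1(instructions):
    # AoC 2016 day 1, part 1: follow every instruction and report the
    # final Manhattan distance. No tracking of revisited locations.
    x, y = 0, 0
    dx, dy = 0, 1  # facing North
    for instruction in instructions.split(", "):
        turn, distance = instruction[0], int(instruction[1:])
        if turn == "R":
            dx, dy = dy, -dx   # rotate clockwise
        else:
            dx, dy = -dy, dx   # rotate counter-clockwise
        x, y = x + dx * distance, y + dy * distance
    return abs(x) + abs(y)

print(find_distance_part1("R5, L5, R5, R3"))  # 12, the worked example from the puzzle
```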

When I tried in Rust the solution didn't even compile, which is business as usual as far as my experience goes for trying to get ChatGPT to write anything practical (not a 'toy' example) in Rust.

I gave it another chance with day 2 in python and it failed at that as well. These are VERY simple tasks, CHILDREN can solve the initial couple days of advent of code.

In this article they give an example of a square root function. Maybe the authors could consider trying some more realistic tasks? So silly...

  • rcme 3 years ago

    > So... If it's so revolutionary, why can't I get it to solve level 1 advent of code problems?

    Because it's a nascent technology that hasn't been optimized for solving advent of code problems. It can, however, do a lot of other cool stuff.

    • gptgpp 3 years ago

      Except it should have been trained on probably tens/hundreds of thousands of 2016 advent of code solutions posted on github and other sites, shouldn't it?

      It often starts hallucinating the input in the exact format advent of code gives, so I'm like 99.9% certain it has been trained on a large corpus of advent of code.

      Personally I use codex sometimes for debugging help so I agree it can do cool stuff. I just disagree it is "almost" right at solving problems -- it almost never generates code that even compiles for me when prompted to do anything not super trivial like advent of code brain teasers.

      • rcme 3 years ago

        What is in the training data and what the model was optimized to do are two totally different things. And even then, tens of thousands of solutions may not be enough to train the model for this specific problem domain.

  • programmarchy 3 years ago

    There’s evidence that it can already solve more difficult problems when given the right prompts and constraints.

    https://github.com/openai/openai-cookbook/blob/main/techniqu...

  • tedsanders 3 years ago

    Seems pretty clear that this question was in its training set and it's regurgitating the answer for part (b). Seems far too coincidental to accidentally get the correct answer to wrong question.

    For me, it solved part (a) perfectly when I told it: "To solve this, write a Python 3 function that takes a string like `"R4, R3, R5, L3, ..."` and outputs the number of blocks to Easter Bunny HQ." The original question on its own was a bit ambiguous in my opinion because it doesn't explicitly contain the input which the user reads on a second page.

    In any case, neither is strong evidence for or against its ability to solve problems like these. First, it's N=1. Second, it's a problem from its training set.

    For me, Copilot/ChatGPT adds value not by replacing my programming but by (a) writing simple code for me and (b) answering my questions about things I don't understand. I operate in a supervisory role where I have to double check everything it says. But, critically, it's faster for me to double check its work than to do everything myself.

    • gptgpp 3 years ago

      I mean, it's not N=1 though. Fails day 2 as well, and a bunch of other tasks I've tried to give it. It's weird how some of you are responding that I've cherry picked a single example, I've done a ton of stuff with chatGPT, you can check my comments on prior experimentation with stuff like mathematics and basic problem solving too. Probably spent like 20 hours with it, total?

      It genuinely fails 100% of the time at coding anything non-trivial for me, and about half the time for simple stuff. Glad you've been having success though, maybe some people are just better at getting it to work, or it has certain domains it excels in, or your tasks are fairly simple.

  • skohan 3 years ago

    Well presumably it will get better, and it will get better at an accelerating rate.

    • bamboozled 3 years ago

      Not trolling, but I'm actually curious how it gets "better" in this case? I mean was it ever meant to actually code?

      As far as I can tell, until it actually understands what it is doing, it's just kind of "blending" what it thinks the most common response is based on thousands of other similar responses to similar questions.

      I can imagine people tweaking it down to be more "right" in some cases, but then won't it just become more wrong in other cases?

      I'm actually starting to understand why AI is good at generating pictures: statistically it's just flipping bits to look like other bits it's seen, relative to the input specified. Code, on the other hand, is something which needs to be more precise.

      There's also the fact that the more people lean on this tech, the more mistakes will be perpetuated into the system and the fewer samples it will have available to learn from, as people are no longer feeding it new answers.

      I guess, like how DeepMind trained AlphaGo, it can learn by playing against itself, but I do imagine the problem space for it to "play itself" in is practically infinite, even compared to Go, which is itself a huge space.

      I'm a software person, not an AI person, but I love thinking about it.

    • gptgpp 3 years ago

      So it will go from generating toy code that usually compiles, to being able to one day reliably solve day 1 advent of code brain teasers, to generating useful software?

      Is there domain limitation to this growth and performance? Medicine, theoretical physics, art, engineering, pure/applied maths, etc.?

      I don't see how you guys are getting this from the current tech? Maybe there is an educational resource someone can suggest?

  • fckgnad 3 years ago

    You give an example of how chatGPT is wrong, while there exist many examples of how chatGPT is right. And you think some wrong examples invalidate the possibility of AI ever being better than you?

    The fact that it's often right is a horrifying omen of the future.

    chatGPT will not replace you. It is the precursor to the thing that will replace you.

    • gptgpp 3 years ago

      Are you seriously accusing me of cherry picking? Get it to write you an MD5 hashing algorithm in Rust. Go ahead, I'll wait. I tried and it genuinely couldn't, I asked it tons of different ways and wasted a ton of time before I had to go do it myself lol.

      Cool, man. So why don't you get chatGPT to start writing you some software? Or optimize an algorithm? Hey, maybe it'll tackle the travelling salesman problem in polynomial time!

      SO many economic and scientific opportunities that will make you wealthy and famous if it's as capable as you claim (e.g. doesn't just solve elementary problems by regurgitating shitty code).

robbomacrae 3 years ago

Maybe I'm overly optimistic but I see these tools as empowering us all to be more productive and become directors who specify requirements rather than spend time doing the work. ChatGPT's ability to write documents for certain things that would take a while to research myself is an obvious example.

For Software Engineering I look forward to stopping writing the majority of my code and instead managing this tool and helping guide it to create apps and websites faster than I can and outside the areas of my expertise.

brianjking 3 years ago

I think at its current state ChatGPT or any GPT3.5 or similarly trained transformer tool can empower someone who cannot code (kind of me) but can understand the logic and prompting that goes into it to get a lot of new things done. I'm a Product Manager who has written some code in the past, but not for many years in a professional setting anyway. I'm quite technical, but not specifically in the code anymore.

In addition, it can help individuals and teams learn/debug/ship things quicker - which is unfortunately/fortunately something that every company wants and needs.

If it does what some think it might, UBI may well become necessary, especially if you consider diffusion tools like DALL-E and Stable Diffusion as well.

nostromo123 3 years ago

Yeah, wake me up when it can do close to the metal programming on an ARM to fulfill real time requests from an industrial robot, for instance...also, in safety critical systems, as a bonus.

I'm not saying that it will never happen, but if your job is threatened by ChatGPT right now, you were not really doing real software engineering anyway.

systematical 3 years ago

Some years ago (perhaps 2015) I told a non-programmer that MOST programmers would be obsolete by the end of the century because of AI. Surprisingly he scoffed saying there is no way AI will be able to handle all those business edge cases. The stuff in the article is pretty basic and a long way from complex business logic, but we are on our way. I just hope business application developers like myself can survive until retirement, which for me is sometime around 2050. I am getting less optimistic.

tluyben2 3 years ago

See it as augmentation and not a threat. People who went ‘into programming’ and learned the basics and then stood still will have issues, but chatgpt still needs senior or talented developers to do anything useful for now (like juniors, you need to tell it things step by step). But then it’s a great tool, saving a lot of time and money. Just wish it was not owned by a few rich peeps and that training costs would drop fast.

quantum_state 3 years ago

This thing would make garbage-in-garbage-out a lot more widespread … accelerating the ruin of info tech …

  • bamboozled 3 years ago

    It will be fun when something like ChatGPT is created that lets anyone exploit people's software en masse; there will be a nice new industry around this.

    Could actually be the end of IT as we know it.

jasfi 3 years ago

This is part of what I'm working on for InventAI (https://inventai.xyz). It'd be great to get into YC, but I'm a solo founder! Any potential co-founders out there?

gsatic 3 years ago

Pretty sure the outsourcing shops will use it to inject more mountains of shit code into everything. But I guess that's not a bad thing for the employment of consultants who will be called in to fix things.

kaba0 3 years ago

It can’t even solve, like, day 4 in Advent of Code, and only solves the earlier ones because those problems can be found solved verbatim on the internet, with a different question attached.

So, nothing, I guess.

squidbot 3 years ago

I feel pretty secure in my job until ChatGPT can write itself. Once it can improve on itself, then I'll be really worried.

mathteddybear 3 years ago

Takes some skill to spot that n(n+1)(2n+1)/6 is not quadratic, doesn't it?
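
(A quick Python sanity check of my own, since that formula is the closed form for 1² + ... + n²: the leading term is n³/3, so it's cubic.)

```python
# n(n+1)(2n+1)/6 is the closed form for the sum of the first n squares,
# a cubic in n, not a quadratic.
def sum_squares(n):
    return n * (n + 1) * (2 * n + 1) // 6

# Matches the brute-force sum...
assert all(sum_squares(n) == sum(i * i for i in range(1, n + 1))
           for n in range(100))

# ...and the ratio against n^3/3 tends to 1, as a cubic should:
print(sum_squares(10**6) / (10**18 / 3))  # ≈ 1.0000015
```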

fizzbuzz69 3 years ago

ill still leetcode just ffs.
