Magic AI Secures $117M to Build an AI Software Engineer

maginative.com

45 points by djcollier 2 years ago · 91 comments

caesil 2 years ago

Maybe I'm naive, but I just don't feel threatened by this at all. As a software engineer, I'd love to have an AI engineer automate the boring stuff so I can work on higher-level architecture concerns.

  • HPsquared 2 years ago

    You're already "inside". The danger with automation is that it can pull up the ladder of developing junior talent on nuts-and-bolts, especially if these platforms aren't accessible to students and small-time developers.

    • calvinmorrison 2 years ago

      Architecture doesn't matter anymore. I can have the AI do general-purpose computing. Who cares if it's slow? Integration work between two systems? Why bother writing APIs when my robot can spawn 1,000 tabs and do manual data entry? Eventually they can just speak English to each other.

      • javajosh 2 years ago

        >Architecture doesn't matter anymore

        That is...exquisitely false. Especially under the tensions of scale, architecture matters a great deal. Yes, computers are ridiculously fast these days, but it is quite easy to architect yourself into a situation that cannot be solved by more hardware, not even theoretically. There are plenty of problems that would require more memory than there are fundamental particles in the universe if you model them wrong.

        As for your offered examples, you seem to be thinking in terms of glue code and how well it could be replaced by an intelligent agent with its attention focused on a browser, reading one tab and entering data in another. This approach is going to break at any kind of scale greater than 1, and probably be quite brittle at 1. I'd argue that such an agent, if properly trained, will quickly realize that the cost/benefit ratio of it doing the job manually is not nearly as good as if it wrote a program to perform this mechanistic task. In which case, architecture still matters but the agent doing the architecting has changed.

      • mplewis 2 years ago

        What general purpose computers have you chosen for this task?

  • kypro 2 years ago

    How many architects does a company need? Realistically, if they succeed you'll be unemployed.

    • rglullis 2 years ago

      Great, so it will be super fast/cheap to build more companies.

      • azinman2 2 years ago

        The people and skill sets needed to successfully run a company have little overlap with those of strong software engineers. It happens, but nowhere near in the numbers of engineers out there.

        Plus, if you kill off many engineers' jobs, who is left to buy your products?

        • rglullis 2 years ago

          Let's go all back to trade school. I hear that plumbers and electricians are making a killing nowadays.

          • blibble 2 years ago

            what will happen to their wages when all white-collar workers take up that career?

            • rglullis 2 years ago

              They can go back to work in the farms, or weave baskets, or pottery...

              Still not good? Ok, maybe this is too radical but hear me out: how about we take this incredible increase in productivity as an opportunity to start paving the way for a true post-scarcity future that can benefit most people, eliminate intellectual property and take everything produced by robots to fund UBI, instead of worrying about our cushy jobs and (relative) wealth/status obtained through bullshit jobs?

              • blibble 2 years ago

                > They can go back to work in the farms

                yes I think feudalism 2.0 is how this will probably turn out

              • moshun 2 years ago

                I enjoy having drinks with people who have also come to this conclusion.

              • mplewis 2 years ago

                Oh cool, when do we get to that future? What’s the realistic path from today to there?

                • rglullis 2 years ago

                  Glad you asked. Here's my very short list:

                  - Get rid of corporations by adding a cap size to the maximum number of people employed by a company. Corporations make sense when we need to coordinate large groups of people around a common project, and when we need the maximum efficiency from each individual. Nowadays most people's work is coordination and the actual labor is done by machines. Getting rid of corporations will lead companies to be smaller and more numerous, and we reduce the complexity of overall coordination needed.

                  - Patent/copyright law reform. Anyone wanting some type of IP protected must put up an insurance bond. If (e.g.) a book author wants to receive $100k for their work, they pay into the system to insure the copyright. Once that value is reached, the bond can be reset (for a larger amount) or the work goes automatically into the public domain. The higher the amount desired and the longer the policy is in place, the more expensive it becomes to hold. For software there is an additional clause: the source code must be made available 3 years after the first release, and failure to reproduce the object code from the source code leads to an automatic fine equal to the value of the bond.

                  - Make tie-in of sales and services illegal: if Amazon wants to provide a software-based service (AWS) then the software should be able to run on any commodity networked computer. If Apple wants to sell a hardware device, people should be allowed to install whatever operating system they want.

                  Now, please don't respond with "it's not realistic", because it very much is. There is no "boil the oceans with a candle" here. These may be difficult, but they are well within the realm of the possible. For my first point, one common objection is "if you limit the size of the company, then you'll have people being employed as contractors", and my response is "you can write the law so that anyone who spends more than 50% of their working time working for the same client is counted as an employee."

    • highwaylights 2 years ago

      I mean, yes, but also management is a wayyy less skilled job that’s far easier to automate. That goes away shortly after we get a competent AI engineer.

      Business analysts are also incredibly vulnerable - why have a middle man if the machine understands your requirements in English/French/whatever?

      • mjr00 2 years ago

        > I mean, yes, but also management is a wayyy less skilled job that’s far easier to automate.

        This is a pretty wild take. The job that is 99% dealing with human interactions is easier to automate than the job where you make a computer do what you want?

        • Jensson 2 years ago

          Yes? Customer support is basically solved by Gemini 1.5 in its current state, but it doesn't replace a software engineer.

          These transformer models are much better at soft skills than hard skills.

  • cosmodisk 2 years ago

    Even without any AI, every senior I know (10+ years of experience) is pretty much trying to shift to an architect or similar position and sees day-to-day programming as almost on the same level as sweeping floors. I appreciate this may not be the case in shops doing something truly unique, but in an average CRUD-app world, this will happen much sooner than a lot of people might think.

  • ponector 2 years ago

    It looks like with AI tools there will be less demand for engineers. For example, instead of keeping 300k people around, Google will replace 200k of them with an AI engineer, AI coder, and AI tester.

    Can you imagine how depressing the effect on your market salary will be?

  • wakawaka28 2 years ago

    In reality you'll be the one figuring out the boring stuff that the AI can't do.

    • herbst 2 years ago

      Yeah. Integrating the 99.9% correct code into already existing systems sounds like the software job of the future.

      • l33t7332273 2 years ago

        Frankly, it sounds not far off from the software job of the present. The correct code is largely the easy part in my experience, but this could be because I struggle immensely with learning each company's/department's/project's esoteric CI/CD pipelines.

        • wakawaka28 2 years ago

          Well we don't use AI now, so most code is written by humans. CI/CD can be hard but it's not actually harder than anything else overall. If you think it's hard for an ordinary software project, I'd bet it's either not your field of expertise, or you're dealing with a system that was probably set up by inexperienced people.

      • wakawaka28 2 years ago

        It might not actually get that effective, but it's certainly wrong to expect to only get fun stuff to do. The AI might actually have decent output when it comes to architecture too. I personally haven't seen such amazing AI but I could accept that it might exist now or in my lifetime.

      • HPsquared 2 years ago

        That, and requirements capture, and prompt writing.

        • wakawaka28 2 years ago

          When AI gets good enough to really displace programmers, it probably won't require as much supervision as it does today. It might feel more like the AI is prompting you instead of the other way around... But idk, it remains to be seen just how good it will actually get.

  • quonn 2 years ago

    I don't quite see why the high-level architecture couldn't be done by AI either, once it gets good enough to build the low-level one (which is what software development really is). Half of architecture is best practices anyway, which are explicitly in the training data, and the other half can be inferred from implicit principles in existing systems that are also in the training data.

  • elorant 2 years ago

    As a software engineer, no, there's no reason to be worried. But what if you were a wordpress developer where a big chunk of your work can be automated?

    • karolist 2 years ago

      Do you seriously think WP is just about blogs? I'd say the majority of WP sites are now ecommerce. Given the number of different plugins interacting based on local markets' needs, the custom tweaks, the integrations with local courier APIs and payment systems, and the general fudginess of the codebase, I'll be very surprised if this can be automated. Your general FAANG proto-pusher SWE will be the first to go.

      • elorant 2 years ago

        The majority of WP sites are definitely not e-commerce. There are about 160M active domains, and WP is estimated to run 40% of the web. That's a whopping 60M+ websites. I doubt more than 10M of them are e-commerce ones.

    • caesil 2 years ago

      Honestly if you're letting yourself sink that deeply into a niche, and neglecting broader programming skills such that you're unemployable outside it, you have no one else to blame when that comes back to bite you.

  • endisneigh 2 years ago

    If anything you should want the opposite. Most companies are fundamentally solving problems within a finite set of domains which could be represented with a finite set of "good" architectures.

    The opinionated thing is how to implement these within the boundaries of the existing codebase, skills, etc.

  • qeternity 2 years ago

    You shouldn't feel any more threatened by this than by interchangeable parts. People upskill and society ends up as a massive net beneficiary.

    Until we have ASI (imho we're not remotely close) then there is plenty of work to be done. It will just involve fewer menial tasks.

    • unraveller 2 years ago

      It merely depends on what we point the AI at whether we get full-blown software engineering out of it. Right now people only point it at tasks that can be written/watched, reasoned about easily, and then verified. There probably needs to be a jump to the visual spectrum, like a wireframe overlay for code/projects, which would allow humans to quickly verify what it is "thinking" architecturally. There just needs to be a lot more imagining of ghost features and trade-offs somehow. People barely comment code, let alone speak of all the things being done away with.

  • echelon 2 years ago

    The journalists, artists, and film people felt this way too.

    These tools will be incredible and change how we do work forever.

    I've done the kind of heavy hitting active-active, five nines engineering you'd think would be safe. I'm not so sure that doesn't change eventually.

  • bearjaws 2 years ago

    I wasn't worried until Google showed up with a 1M token context with 99.9% recall...

    • logicchains 2 years ago

      Some random team at Berkeley beat them to it by a couple days: https://largeworldmodel.github.io/ . It's just a matter of throwing compute at it, nothing fancy. OpenAI could probably do 1M token context too but they haven't yet found a way to make it profitable (neither have Google; the most they actually offer customers is 256k).

      • x86x87 2 years ago

        This brings a very interesting question: if you could have an AI software engineer today but it would cost you 1 trillion dollars, would you want and be able to afford it?

        There is a reason why we still have people working at McDonald's even though fully automating it has been possible for a couple of decades now.

        • jjmarr 2 years ago

          It's the same reason that for 80 years after the invention of the commercial icemaker in 1842, the American ice-harvesting industry produced more frozen water than manufacturing plants. And the ice trade did not exist until 1806.

          https://en.wikipedia.org/wiki/Ice_trade

          It was more economical to send people out to cut ice from a lake in Maine and ship it by rail to Chicago than it was to just freeze water from a local supply. It was also more reliable since the technology was mature, versus ice plants that often broke down when meatpackers needed a consistent supply.

          There's no reason why this won't be the case for AI unless semiconductor manufacturing continues its exponential performance/cost growth. The demand for technologically obsolete goods and services does not instantly disappear when a superior product enters the market.

          Human software engineers right now are more reliable than AIs for most price-points. This is true for most industries in which machine learning is present.

        • tarruda 2 years ago

          > cost you 1 trillion dollars

          How did you come up with this number? It seems pretty unrealistic.

          > There is a reason why we still have people working at McDonald's even though fully automating it has been possible for a couple of decades now.

          Maybe the low salary is the reason? If it is a bit more costly to automate certain aspects of manual labor, then the low salaries might remove the incentive to do so. This is not the case for software engineering.

          • toyg 2 years ago

            Beyond the snark, this is basically it. It's the same reason the Roman Empire, despite all its technological prowess, never tried hard to automate relatively low-hanging-fruit tasks: because slaves were cheap, plentiful, and more flexible ("reprogrammable") than anything mechanical could ever be.

            If it costs $1m p/y to run a machine that cooks burgers and fries, or $30k for an employee who can do that _and_ cover something else when someone else is ill, it's a no-brainer. But businesses had to discover that the hard way; until the 80s, most people were still convinced automation would win everywhere, because it had won (and won big) in manufacturing. A combination of factors, from the '80s onwards, made labor costs effectively fall, which created our reality where certain jobs are so cheap that automating them makes no sense.

            The "problem" is that, in certain regions, software development costs reached a point where automation looks very, very appealing. If a machine costs 500k p/y to replace a few 150k p/y SWEs without all those pesky employment complications, businesses will happily choose "AWS AI CloudDeveloper"...
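            The break-even logic above is simple arithmetic. A sketch, using the comment's illustrative figures; reading "a few" engineers as four is an assumption:

```python
# Break-even sketch: hypothetical AI-dev service vs. a small team of SWEs.
machine_cost = 500_000        # $/year, the comment's "500k p/y" machine
swe_cost = 150_000            # $/year per engineer
engineers_replaced = 4        # assumed reading of "a few"
human_total = swe_cost * engineers_replaced

if machine_cost < human_total:
    print(f"Automation saves ${human_total - machine_cost:,}/year")
else:
    print(f"Humans are cheaper by ${machine_cost - human_total:,}/year")
```

With those numbers automation already comes out $100k/year ahead, and the gap widens with every additional engineer replaced.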

            • tarruda 2 years ago

              > If a machine costs 500k p/y

              Do you mean an AI programmer would cost $500k per year? If so I think you greatly overestimate the cost.

              Recently I did some text processing with GPT-4 turbo (128k context) and I reached the daily limit of 5 million tokens. IIRC it cost me around $70 bucks for the day.

              I think $70 is roughly the hourly rate of an SE with a $150k salary working 40 hours per week. Note that we are at the early stages of this tech; it will probably only get cheaper from here.
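              A quick sanity check of that arithmetic. This is a sketch: the ~$0.01 per 1K input tokens is an assumed GPT-4 Turbo list price, and output tokens are ignored for simplicity:

```python
# Back-of-envelope: a heavy day of LLM usage vs. an engineer's hourly rate.
tokens_per_day = 5_000_000          # the daily limit mentioned above
price_per_1k_input = 0.01           # assumed GPT-4 Turbo input price, $/1K tokens
llm_daily_cost = tokens_per_day / 1000 * price_per_1k_input

salary = 150_000                    # $/year
hours_per_year = 52 * 40            # 40-hour weeks, no vacation
engineer_hourly = salary / hours_per_year

print(f"LLM for a day: ${llm_daily_cost:.0f}")      # $50
print(f"Engineer hour: ${engineer_hourly:.2f}")     # ~$72.12
```

So at assumed list prices the $70/day figure is in the right ballpark: a full day of heavy LLM usage costs about one engineer-hour.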

              • BandButcher 2 years ago

                "IIRC it cost me around $70 bucks for the day."

                Sure, for you that was the price. Enterprise cost would be way different.

                "Note that we are at early stages with this tech, it will probably only get cheaper from here."

                Haha, people who pay for these AI tools can only hope... Ask any cloud provider, streaming service, or utility company if their prices are cheaper now than before.

                As these AI tools get better, they will require more resources to run (per Altman's 7-trillion-dollar request) and will most likely drive up costs.

                But hopefully you are right, as I believe we as humanity would be best served spending as little money and resources as possible on AI.

              • toyg 2 years ago

                > If so I think you greatly overestimate the cost.

                I suspect you underestimate it. Raw engine cost is one thing; what businesses downstream will actually pay, is another. Look at AWS: a lot of businesses don't even touch it directly, their vendor ISPs do. If "AIDev" really becomes a thing, businesses will buy specialized services (e.g. "ApiBuilder.io", "YAMLCrusher.io", etc etc), which will obviously command a premium on top of top-tier, 5-9s guaranteed, "raw" ml engines.

          • calvinmorrison 2 years ago

            Aw look, hn is reinventing capitalism from first principles!

            • x86x87 2 years ago

              FWIW my question was rhetorical.

              It was meant to highlight that even if you have the tech (which we don't - the things ChatGPT or Copilot do are impressive, but they're still cheap tricks, and the models are super expensive to actually train), it may not make economic sense to deploy it.

              Even if it makes sense to deploy, the social unrest and volatility that results may not end well for society. (What's the point if all the consumers go away or cannot actually buy the shit you're producing?)

      • jjmarr 2 years ago

        GitHub Copilot charges $10 for a subscription on which it loses an average (!) of $20 per user.

        https://www.theregister.com/2023/10/11/github_ai_copilot_mic...

        "Make it profitable" appears a secondary concern in the AI space.

        • cosmodisk 2 years ago

          I started using it recently. $30, $50, or even $100 a month is literally nothing for most companies in the Western world. They'll hike up the prices eventually.

      • bearjaws 2 years ago

        I believe they are implying they can do it without dramatically increasing memory utilization.

        If 1M context uses 32x the memory of 32k, it's a non-starter. Even a smallish LLM like Mixtral uses 4-8 GB of memory just for your prompt. You would need 256+ GiB at 1M...
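        The scaling concern can be sketched with a KV-cache estimate. The Mixtral-like dimensions here (32 layers, 8 KV heads via grouped-query attention, head dim 128, fp16) are illustrative assumptions, not the model's published config:

```python
# KV-cache memory grows linearly with context length.
def kv_cache_gib(context_tokens, layers=32, kv_heads=8, head_dim=128, dtype_bytes=2):
    # Each token stores a key and a value vector per layer per KV head.
    per_token = layers * 2 * kv_heads * head_dim * dtype_bytes  # 128 KiB here
    return context_tokens * per_token / 2**30

print(f"{kv_cache_gib(32_000):.1f} GiB at 32k tokens")      # ~3.9 GiB
print(f"{kv_cache_gib(1_000_000):.1f} GiB at 1M tokens")    # ~122.1 GiB
```

Under these assumptions the cache really does grow ~31x from 32k to 1M, which is why sub-linear attention memory (or aggressive cache compression) would be the interesting claim.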

      • tarruda 2 years ago

        > It's just a matter of throwing compute at it, nothing fancy.

        I read somewhere that there was a recent breakthrough that enabled this.

        Even if it costs a lot to run inference with 1M token context, it is hard to imagine it would cost anywhere close to a software engineer salary.

    • pton_xd 2 years ago

      GPT-4, as smart and impressive as it is, starts forgetting or confusing key instructions with as few as 500 tokens (in my experimentation). Practically speaking, the advertised 32k context window could be effectively a few orders of magnitude smaller depending on what you're asking it to do!

      • tarruda 2 years ago

        Did you see the Gemini 1.5 1M token demo? They upload the three.js library and ask it to make changes, which it does successfully and pretty quickly.

        I want LLMs to fail at my profession as much as everyone at risk of losing their jobs, but unless Google is lying, things are looking pretty grim.

        https://youtu.be/SSnsmqIj1MI?si=N0zYY_Zbbfz3KRWK

        • pton_xd 2 years ago

          Yeah, LLMs seem exceptionally good at summarizing large amounts of structured data with a prompt at the end, like that YouTube video demonstrates.

          If you have a back-and-forth conversation, with the previous conversation chunks prepended as context to the next interaction, it will rapidly lose track of where you instructed it to spend its attention.

          The manner in which the context is used seems to make a huge difference.

        • betaby 2 years ago

          I didn't see the Gemini one. I saw other demos, only to learn later on that they were staged.

          • Workaccount2 2 years ago

            Google seems to have learned from that demo this time and made very plain and upfront demos.

          • tarruda 2 years ago

            I hope that you are right, and that they faked the demo to get a short-term pump in the stock price.

  • TheLoafOfBread 2 years ago

    I am feeling the same. Paying 100 EUR/month for a competent AI junior programmer onto which I could offload simple or repetitive tasks would be an absolute banger. But so far AI is hype and crap code.

  • xanderlewis 2 years ago

    Those who do worry do so because they assume (rightly or wrongly) that it will be possible to automate everything you do, including ‘higher-level architecture concerns’.

  • jbandela1 2 years ago

    I don’t think you’re naive.

    Providing detailed instructions to computers to accomplish human/business objectives is the hard part about being a programmer.

    But the level of abstraction has been increasing. It started as physically flipping switches then machine code then assembly then structured programming then object oriented programming and so on.

    I remember during the 1990s with objects and VB custom controls people were talking that businesses would just hire a bunch of high school students to work part time just snapping components together like Legos.

samsk 2 years ago

Yeah, let's imagine that magic:

Prompt: "AI, production is down. Fix it."
AI: "Working..."

3 days later.

CEO: "Hey, dear contractor, our production has been down for 3 days - can you fix it?"
Contractor: "Sure, give me $500/hour and within 3 weeks it might work again. You know, you have 10mio SLOC and 1mio npm dependencies, so it will take a 'bit' longer..."

Yes, it's a bit oversimplified, but imagine it ;)

  • mjr00 2 years ago

    It's easy to imagine because it already happened in the mid-late 00s with the first attempt at software dev cost reduction when companies tried outsourcing to cheap, underqualified labor in India, Bangladesh, etc.

    Being the person who comes in to salvage a software disaster is a hell of a different career than being a greenfield cloud-startup dev or FAANG proto-pusher, but it can be extremely profitable. And yes, I suspect the demand for that type of software expert will be on the rise.

htrp 2 years ago

Founded in 2022.

$145mn raised to date.

No demo at all.

  • highwaylights 2 years ago

    I’m not saying they’re not vapour, but the article does mention a series of closed door demos and thousands of GPUs deployed, so it at least appears that someone has seen something.

    • toomuchtodo 2 years ago

      https://en.wikipedia.org/wiki/Magic_Leap

      https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

      Healthy skepticism is pragmatic. Sequoia and FTX come to mind as well. Don't assume an adult was in the room doing due diligence.

      • arbuge 2 years ago

        That's not the same Magic...

        • toomuchtodo 2 years ago

          Different product categories, same vaporware/hype train potential. How good would the autocomplete demo have to be for someone to convince the unsophisticated to hand over $500M (based on a preponderance of the historical evidence)? That's the thesis. Not disbelieving the potential of the technology (considering Gemini 1.5 and Sora recently), but you're going to have to be fairly sophisticated to know the difference between what is real, what is show, and if it's going to have legs to carry a valuation vs being a flash in the pan.

          (have done technical due diligence for M&A, but also realize a greater fool can be found; ask lots of questions, and ensure the people you're trusting to validate the responses to those questions have the necessary domain expertise)

    • mplewis 2 years ago

      Clinkle raised $25M too.

  • campers 2 years ago

    There's a short demo from June last year https://magic.dev/blog/ltm-1

  • karolist 2 years ago

    Yet there were times when demos were attached as CDs to magazines.

  • threeseed 2 years ago

    > No demo at all

    This is how the majority of companies actually build and release products.

    You do demos behind closed doors to investors/board before showing the public.

johnwheeler 2 years ago

I'm in the Penrose camp that thinks understanding is the result of quantum activity. LLMs are really good at making associations, but they don't make "connections" in the sense of being able to integrate an understanding of the world and contribute to it in an original way. Consider also the power efficiency of the brain relative to GPU farms. It just makes me think we're nowhere even close.

I certainly hope that’s the case. I really like programming and building things for money, fun, and status.

hkt 2 years ago

We shouldn't be surprised when capital tries to eat its children.

It'd probably be best if we all learned some agriculture and opted for simpler lives, rebuilt our social capital (aka community) and learned to do leisure like sane people. The rat race has never thanked anyone for participating, not even software engineers.

  • javajosh 2 years ago

    Opting-out isn't an option when capital becomes capable of deploying (robotic or human) cops and soldiers to any point on earth in 30 minutes. At that point, the real power rests with whoever (or whatever) decides how to allocate capital and those who've opted out live and die only at their whim. Note that legal and political frameworks only need to be neutralized, not eliminated, for this to be effective. One can neutralize the legal system by injecting complexity and pricing out ordinary people from using it, and share small amounts of capital with legal leadership. One can neutralize the political framework by spending considerable capital on manufacturing consent, and improving return on that investment with technology.

    Of course, such a solution isn't impossible. It may be that such a village, if left alone by circumstance and/or because it has an insider champion, may create such psychologically healthy and intellectually talented people that they may eventually challenge the greater status quo. Sounds like the premise of a YA mild dystopia novel!

globalise83 2 years ago

As an open source contributor, not only did you give your software away for free, but you also trained the AI robots who will take away your day job?

  • mplewis 2 years ago

    No open source contributors consented to their work being used for this purpose.

darth_avocado 2 years ago

Just wait till the AI has to update npm packages. I think I’m safe.

  • greggroth 2 years ago

    Updating dependencies and frameworks is maybe the biggest thing I can't wait for Copilot to do for me.

  • x86x87 2 years ago

    Just wait until ai invents its own npm that people are no longer able to understand. Good luck with that.

    • darth_avocado 2 years ago

      But AI is trained on data that exists right now. The more likely scenario is that it builds its own npm that is similar to npm and still sucks.

    • karolist 2 years ago

      Some say that the node_modules directory is the heaviest object in the universe. All we know is it's called Node JS.

      • BandButcher 2 years ago

        No wonder I was feeling a strong gravitational pull running npm install.

        I'm afraid if I run an 'rm -rf' command I might take out the entire electrical grid!

quonn 2 years ago

No matter which side of the debate you fall on, it would be interesting to discuss the practical impact and timeframe of any AI software engineer.

I think a realistic timeframe is about 15 years from the time someone releases a viable tool to the complete replacement of the last engineer. About halfway through, the pressure on salaries will become noticeable. I base this guess on how companies typically operate and how long practical adoption of smaller technology changes takes.

In the first half there should be both a downward pressure due to the threat of replacement but at the same time an upward pressure for the same reason. New grads will stop coming and existing engineers can look forward to being phased out and command a premium for as long as they can.

Another possibility is that it doesn't quite get good enough, but regardless students start picking other subjects, creating a temporary shortage that sustains existing employees. Employees will be reduced through retirement anyway, with half retired after 20 years of this process.

Yet another possibility is that a half-assed tool only reduces demand - again, that can be absorbed through retirement plus fewer or no new people joining the field.

  • BandButcher 2 years ago

    I'm still fuzzy on this whole AI thing (as a dev).

    Would our industry create a new job role such as "prompt engineer" aka "ai engineer supervisor" ??

    Or would this ai developer be able to read jira/kanban tickets, cooperate with other "teammates", deploy fixes, etc with no major oversight?

    Genuinely curious.

mplewis 2 years ago

Yawn. The bubble is popping.
