Tell HN: As AI gets better, are scripting languages losing their appeal?
As AI keeps getting better and better at coding, I kind of feel that using scripting languages is starting to lose its appeal.
For example, why create a Python Lambda that has a large cold start, is slower, and ends up costing more, when I can build the same Lambda in C++?
The same applies to bloated Electron applications: I can build the same thing, and often something even better, far more efficiently using native code.

> For example, why create a Python Lambda that has a large cold start, is slower, and ends up costing more, when I can build the same Lambda in C++?

So that you can have less code, which is more readable and focused on the actual task. Auditing code matters if you care about code quality at all, which you had better if you want to build significantly beyond what the AI can one-shot.

> The same applies to bloated Electron applications

This is actually completely different. I agree with you on avoiding bloat, but efficiency is not the opposite of bloat.

The compile loop still slows down an LLM, so as always it's mostly project dependent. What you want are the errors a compiler provides as guidance for the LLM, and ideally no compile loop; so if your project supports it, Python plus a linter that is hooked into the LLM for deterministic firing is a very solid choice for a lot of LLM projects.

As to the Electron application: I sort of disagree as far as LLMs are concerned. Once your program takes care of as many things as Electron does under the hood, your LOC is going to be fairly large, and that metric itself is going to be the main thing that makes the codebase less and less compatible with LLM usage. Generally speaking, it's best to stick to really modular, non-monolithic codebases and make a best attempt at keeping each module small and well documented. Once you get some huge monolith somewhere in the codebase, LLMs will start losing themselves in unpacking it, and the quality of work will drop elsewhere as context is eaten.

RISC-V via librisc implies a lot of untapped potential. There's C/C++ and a bunch of other languages usable inside that. Python/Ruby/Erlang et al. still have a lot of horsepower, so I'd not discount them yet.
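The "linter hooked into the LLM for deterministic firing" idea can be sketched in a few lines. This is a minimal illustration under stated assumptions, not any particular tool's API: the deterministic check here is just Python's built-in `compile()`, standing in for a real linter such as ruff or flake8, and `generate_fix` is a hypothetical stand-in for whatever LLM call you actually use.

```python
# Sketch: after each LLM edit, run a deterministic check and feed any
# errors straight back into the model's context, instead of a slow
# compile loop. The "linter" here is Python's own compiler; a real
# setup would shell out to ruff/flake8/mypy and collect their output.

def check_source(source: str, filename: str = "<llm-output>") -> list[str]:
    """Return a list of error strings; an empty list means the code parses."""
    try:
        compile(source, filename, "exec")
        return []
    except SyntaxError as err:
        return [f"{filename}:{err.lineno}: {err.msg}"]

def review_loop(source: str, generate_fix, max_rounds: int = 3) -> str:
    """Feed checker errors back to the model until the code comes back clean.

    `generate_fix(source, errors)` is a hypothetical LLM call that returns
    a revised version of `source` given the error messages.
    """
    for _ in range(max_rounds):
        errors = check_source(source)
        if not errors:
            return source
        source = generate_fix(source, errors)
    return source

# Demo with a broken snippet and a trivial stand-in "model" that just
# adds the missing colon when asked to fix the error.
broken = "def add(a, b)\n    return a + b\n"
fixed = review_loop(broken, lambda src, errs: src.replace("b)", "b):"))
print(check_source(fixed))  # → []
```

The point of the structure is that the check is cheap and deterministic, so it can fire on every edit and the model only ever sees concrete error messages rather than waiting on a full build.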