Scripts should be written using the project main language (joaomagfreitas.link)
What the issue really seems to be is:
1. Scripts should be maintained and tested, and
2. The language used for scripts necessarily is, and should be treated as, a project language, and the appropriateness of that choice should weigh all the factors that go into choosing a language for any other purpose, including the impact on complexity if it isn't the main language, fitness for purpose, and any additional dev platform, tooling, etc. constraints it imposes, but...
This doesn't imply scripts should be in the main project language, any more than it is generally the case that projects must be monolingual.
Much more nuanced, thank you! For one thing, you need a language to bootstrap your environment in dev envs and CI. You wouldn't want to write a Python script which has to support a bunch of minor versions (with vastly different capabilities) just to run `poetry install`.
If you’re working on a C++ project, saying that all of your scripts related to that project must also be in C++ when Python would do fine is ridiculous. You should just pick a reasonable language.
I do that in my Go projects.
In fact my "scripts" are actually part of the main executable. I use cmd-line args to invoke the needed functionality.
For example, in the past I would have written a Python script to deploy my Go binary to a server, possibly using tools like Fabric that provide functionality to make it easier.
Today I add a `-deploy-hetzner` cmd-line flag to my Go binary and it does the work. It builds itself, copies the binary to the server, kills the old instances, configures caddy if needed, starts the newly uploaded instance, etc.
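A compressed, hypothetical sketch of the shape such a flag can take -- host, path, and service name here are made up, and the real deploy.go linked below does much more:

    package main

    import (
        "flag"
        "os/exec"
    )

    // run executes a command and panics on failure -- no error handling,
    // just panic, as described below.
    func run(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            panic(name + ": " + string(out))
        }
    }

    func main() {
        deploy := flag.Bool("deploy-hetzner", false, "build and deploy")
        flag.Parse()
        if *deploy {
            run("go", "build", "-o", "app", ".")                       // build itself
            run("scp", "app", "root@myserver:/opt/app/app")            // copy to server
            run("ssh", "root@myserver", "systemctl", "restart", "app") // restart instance
            return
        }
        // ... normal program entry point ...
    }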
For example my deploy.go is 409 lines of code, which is not that bad. You can see exactly how this works: https://github.com/kjk/edna/blob/main/server/deploy.go
I standardized on how I deploy things so deploy.go is mostly re-used among several projects.
Writing this code isn't much more difficult than what I used to write in Python.
This kind of code can be shorter because I don't have to handle errors, I just panic if something goes wrong.
I like that I don't have to switch between different languages and that I have full control and understanding over what happens. Fabric used to be a bit of a black box.
I even wrote an article about this idea: https://blog.kowalczyk.info/article/4b1f9201181340099b698246...
I'm a big fan of //go:build exclude ... and having separate CLI scripts in a project where I can.
Candidly, I think that it's much easier to do this in Go (or Rust) rather than, say, Python/Ruby/Node, as I can use compiled binaries without needing a runtime.
Edit: //go:build ignore is idiomatic
Go’s excellent for writing development tools and “scripts” if you have to support development on heterogenous platforms.
No per-platform install instructions, no “ok but you have to run this in WSL2… you don’t have that? Oh god, ok, let’s find those instructions…”, no “oh fuck the flags for that are different on BSD and Linux, so this breaks on macOS”. “Wait the command’s python3 not python, on your system? And also it’s erroring even after you fix that? Shit you’re still on 3.8, we use features that weren’t included until 3.10…”
“Unzip this single binary and run it. Tell your OS to trust it if it hassles you. That’s it.”
C# with mono might be close to as good? Not sure. Rust’s probably OK but a bit of an investment when you’re just trying to dash off some quick tool or script. C++ and C can do it but you definitely don’t want to. Java’s obviously out (ok, now update your JRE… wait, your env vars are all fucked up, hold on…).
Go’s also highly likely to be something another developer with or without Go experience can read and tweak after the person who wrote it leaves, without much trouble.
Why use build tags for this over ./cmd/$toolname/main.go?
Do build-excluded files get tested with `go test`? (tbf, not sure it's practical to test most scripty tasks)
Think of this as a way to have a file with package main, and a func main that does not interfere with your normal build process...
The best example of this, and a decent util is this: https://go.dev/src/crypto/tls/generate_cert.go
All of the real testing happens elsewhere this just provides utility.
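A minimal sketch of the pattern (file name and contents illustrative): the `ignore` tag is never satisfied, so `go build` and `go test` skip the file, but `go run gen.go` still runs it.

    //go:build ignore

    // Excluded from normal builds and tests; invoked directly
    // with: go run gen.go
    package main

    import "fmt"

    func main() {
        fmt.Println("one-off maintenance task, not part of the build")
    }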
This seems like a terrible case of not separating concerns and needless optimization. It may work for you, but what about someone who does not know Go but wants to use your project on a provider that is not Hetzner?
> It may work for you, but what about someone who does not know Go but wants to use your project on a provider that is not Hetzner?
Well they'd run into the same problem if the script was Python, wouldn't they?
And Go is ultimately far far easier to read, modify[1] and debug for someone who doesn't know it than Python is.
[1] For example, adding a third-party library to handle new cases. I've never had a good time doing it with Python's various package management, but Go seems pretty good at it.
> Well they'd run into the same problem if the script was Python, wouldn't they?
No, they would just use whatever tool is more appropriate for the job.
Fabric would be fine, Ansible would be fine, Salt would be fine, putting it on Docker and deploying via compose would be fine. None of these tools force you to learn the implementation details of the application.
> Fabric would be fine, Ansible would be fine, Salt would be fine, putting it on Docker and deploying via compose would be fine.
That still leaves them the problem of "If I don't know this tech stack, what do I do?"
You are working on the assumption that everyone already knows Fabric, Ansible, Salt, or Docker.
Whichever poison you choose, people who don't know that particular poison won't know how to install to a different provider anyway!
You're making an argument against one particular poison, but I don't see it as any better or any worse than the others, which all have enough complexities to fill a book.
> "If I don't know this tech stack, what do I do?"
If you don't know how to use a tool that does one job, either you learn it or you just delegate that job to someone who knows it already. But defending the practice of having your application code also responsible for dealing with deployment and packaging smells a lot like "coders are gonna code".
> If you don't know how to use a tool that does one job, either you learn it or you just delegate that job to someone who knows it already.
How is that different from "If you don't know how to use the tool (Go), either you learn it or you just delegate that job to someone who knows it."?
> But defending the practice
Woah there cowboy, I wasn't advocating an architectural decision suitable for a FAANG. I was "defending" the position of "why use an extra tool for this one program that has no other dependencies?"
> of having your application code also responsible for dealing with deployment and packaging smells a lot like "coders are gonna code".
In much the same vein, switching to Ansible, Fabric, etc simply to install a single binary to a single place smells a lot like "resume-driven-development".
You are being needlessly picky here, Golang is in fact a very viable replacement for both sh/bash/zsh and Python scripts, it compiles to a single static binary, can be cross-compiled to every supported OS on another host, and is thus easy to distribute.
I already started replacing parts of my shell scripts with Golang programs and the experience is miles ahead.
And if the project is already written in Golang, it makes even more sense if the dev is also partially the DevOps. Why wouldn't they make the experience as seamless as possible for themselves and the rest of the dev team? What good would Ansible do if everyone has yet to learn it?
As for the person you originally replied to, I'd definitely make a separate program that provides tooling related to the main program... but that's the only different thing I'd do.
Your snarky "coders gonna code" comment is detached from reality, we have jobs to do, and within deadlines.
> it compiles to a single static binary, can be cross-compiled to every supported OS on another host, and is thus easy to distribute.
You could say the same thing about Java or C++. Would you argue then that the best way to deploy Java applications is by writing a Java library and embedding it into your program?
> if the project is already written in Golang, it makes even more sense if the dev is also partially the DevOps.
I really don't see how that one follows from the other, and I really don't understand this conflation between roles and tooling. There is a lot more to "DevOps" than packaging/distributing/deploying. It's not just because your application code is in Go that your tooling needs to be as well.
> Why wouldn't they make the experience as seamless as possible for themselves and the rest of the dev team?
Because it's a fragile solution. You are tying the implementation of your problem to a very narrow space out of immediate convenience. I've had my share of projects with "experienced" node.js developers who are trying to do everything in node instead of reaching out of their comfort zone and looking for existing solutions elsewhere.
> we have jobs to do, and within deadlines.
Yeah, and a lot of those jobs could have been done in a fraction of the time if the developers were not trying to use their screwdrivers as hammers, and put in some effort to figure out how to use the hammer in the first place.
> Yeah, and a lot of those jobs could have been done in a fraction of the time if the developers were not trying to use their screwdrivers as hammers
Did you deliberately ignore the part where I asked why a team of devs would learn Ansible if they don't know it? No, it's not a fraction of the time at all; in fact it might balloon to weeks under the right bad conditions.
Most people would opt for something they already know and that's a fact of life. Learning Ansible or Fabric on-demand is not exactly an optimal expenditure of time and energy, not to mention that a proper DevOps professional would know various gotchas that a learning dev will absolutely miss. Sure the dev will EVENTUALLY get there. "Eventually" being the key word. Why risk botching stuff in the meantime?
> Because it's a fragile solution
Says you. I do Rust scripting in my Rust projects, Golang scripting in my Golang projects, and Elixir scripting in the Elixir ones. Works really well. Not necessarily for deploying and DevOps work, mind you, but for a lot of stuff like "populate me this sample data so I can experiment with the app" or "start only these parts of our app so we can do an integration test with them after" etc.
I also did part of the deployment actions, not all though, if that makes a difference. I don't know the exact situation of the person you replied to.
> You could say the same thing about Java or C++
Huge, huge disagree here, probably the biggest I ever expressed on HN. :D
So many environment variables, compiler switches, and various runtime dependencies and so many other things that it's not even worth starting, it can fill up a PhD thesis and have room for more after.
Your statement couldn't be more wrong if it tried. I have several serious objections to Golang the language that prevent me from adopting it as a main career choice -- but its portability and distribute-friendliness are among the best in the world right now. You can criticize it for a lot, but criticizing it for these two things would be severely uninformed.
> I really don't see how that one follows from the other, and I really don't understand this conflation between roles and tooling. There is a lot more to "DevOps" than packaging/distributing/deploying. It's not just because your application code is in Go that your tooling needs to be as well.
Sure there's more to DevOps, I agree, but I was under the impression that the guy you replied to was tasked with doing part of DevOps so he found a way of doing it with less friction.
Conflation between roles and tooling is unfortunate indeed and I sympathize with your statement there. It's simply an artifact of imperfect work conditions (so 99.999% of all work places).
I can relate to your frustration about non-optimal processes but you are overreacting a bit.
Can their homegrown solution backfire? Absolutely, of course it can. But often times we have to make do with what we have right now and can't optimize for the far future.
If a DevOps wants to pick up the torch and do the job properly they are free to do so but yet again, I was under the impression that the person in question was wearing several role hats at the time.
> "populate me this sample data so I can experiment with the app"
Which is well within the role of the developer. "Deploy to Hetzner" is not.
> I was under the impression that the person in question was wearing several role hats at the time.
Sure, I also have to do everything on my own in my own projects. But again, the function determines the tool, not the person performing the job. When I'm wearing my application developer hat, you'll see me try to do as much as possible with Python, but that doesn't mean that I will try to turn every problem into a Python script.
> Which is well within the role of the developer. "Deploy to Hetzner" is not.
Well, OK, I can't argue with that as much. Maybe the guy truly found it easier to do so? No clue. I know I'd definitely try with a shell script, then with the project's language, and only then with a dedicated tool. As much as I try to be a good senior backend dev -- and I know a lot of stuff and tooling that are not at all mandatory for my role -- I still can't find the time and energy to become a properly good DevOps; too much investment, and for specific cloud platforms. To this day I can't justify it and can't make myself "git gud" with AWS.
> When I'm wearing my application developer hat, you'll see me try to do as much as possible with Python, but that doesn't mean that I will try to turn every problem into a Python script.
No, you really should not: Python is a nightmare in portability. I can trivially use venvs and other tooling to make sure stuff is isolated and works fine, but the whole house of cards is still always just one `brew upgrade` command away from falling apart. Eventually I've given up. If Python can't be trivially made portable then it's a bad tool for scripting, no matter what many people think and say (in fact, a lot of people using a tool often speaks badly for its quality; to me "Python is good" is a mass delusion, but that's a huge topic in itself that I won't tackle here). The fact that many people don't know anything else but Python is not an excuse.
Golang is a much better choice for quick scripting and that's a demonstrable fact. You can parse CLI flags, use their values, and even throw in some validation in some 10-15 lines of code, and from then on you can use various super-useful libraries; there even exist a few that emulate a number of UNIX tools, so you can still have a single binary that can do a lot without depending on anything except a kernel being present in the system (which also makes the resulting binary very convenient in constrained / container environments).
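Roughly those 10-15 lines, as a sketch with illustrative flag names:

    package main

    import (
        "flag"
        "fmt"
        "os"
    )

    func main() {
        in := flag.String("in", "", "input file (required)")
        workers := flag.Int("workers", 1, "number of workers")
        flag.Parse()

        // Validation: bail out with usage text on bad input.
        if *in == "" || *workers < 1 {
            flag.Usage()
            os.Exit(2)
        }
        fmt.Printf("processing %s with %d workers\n", *in, *workers)
    }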
You might think I am getting hung up on details but I felt that I have to debunk the idea that Python is a good scripting tool. No it really isn't, and most Python devs never test on a random Mac or Linux machine where the dev is NOT using Python as a primary language. It's a fragile mess.
But now we're arguing in circles about familiarity. :) I recognize the irony. But for what it's worth, I didn't know Golang and made it a goal to learn it a long time ago because I liked its distribution story and (mostly) ease of use (with a bunch of caveats nowadays, sadly).
Well, maybe he never wants "someone" to deploy his project. I think it is perfectly legitimate to handle it this way for a one-man hobby project.
For a project where multiple people are involved, I think it's better to split concerns. Deployment should be handled transparently and reproducibly in a CI pipeline that the team members who are allowed to deploy have access to.
But if you are a one-man show (and you want to keep it that way) you are in no position to say what the best practice for a team would be.
Yes you are. You are telling yourself what's the best way to do things in your own team, that is comprised of only yourself.
If your argument can only be made in a very specific set of conditions and only applies to yourself, it's just a preference.
I don't care how you prefer to do things on your own, but if I were interviewing you for a senior position in a team with a handful of people, and you tell me that's how you want them to work, it would be an almost immediate NO HIRE.
Yes, but you are moving goalposts. I was under the impression we were talking about the one-man army case only, not how they conduct themselves on interviews which is a VERY different topic.
The blog post was about the context of a team with multiple people.
OP's comment was "I do this for my Go projects" and goes on to describe how to use an (open source) library that even deploys to Hetzner.
So, either the commenter was trying to extend their practice to the point of the blog post or the comment was just expressing a preference when working by themselves, which has little to do with the content of the post.
Sure, I don't think we disagree as much as I thought an hour or two ago; I am still a little puzzled why you found it so bad though.
If a deployment procedure is easy for non-DevOps people then by definition that also makes it friendly to programmers, so why wouldn't they do it if they can do it properly with minimal effort? But I do realize that "properly" is doing a lot of work in this sentence.
Not OP, but I presume that one could add a `-deploy-aws` cmd-line flag to the Go binary with whatever steps AWS requires.
Yeah, this is the way to end up with an ad-hoc, bug-ridden, half-baked implementation of Fabric (or Ansible, Salt, Helm, Nix, or any tool that is more appropriate for this job).
And you haven't addressed the issue that whoever is performing the role of DevOps on this project must know Go.
If your person doing DevOps can't learn GoLang, you have bigger problems.
Nice "No true Scotsman", and you are completely missing the point.
You are erroneously invoking a fallacy that does not apply here. A qualified DevOps engineer will be able to pick up GoLang. I didn't claim that a DevOps engineer already knows GoLang, but that they're able to pick it up. If I'd said that only true DevOps engineers already know GoLang, you'd have a point.
Unfortunately, you missed it too.
> qualified DevOps engineer
Which is not something that one professional needs to be in order to have to perform some duties that fall within the role of DevOps.
I am, first and foremost, an application developer using Python and JavaScript. But I also have extensive experience:
- setting up Docker Swarm clusters
- managing cloud infrastructure with Terraform
- configuring deployments with Ansible
- setting up observability on Prometheus/Loki/Grafana stack
- putting together CI servers for my team
- setting up backup/restore infrastructure.
- deploying and managing internal OSS tools for a dev/data team: Gitea, Taiga, Redash, Apache Airflow, Baserow...
These are all tasks that could be part of a "DevOps Engineer" job description, but if you tell me that "to work with a primarily Golang dev team you need to pick up Golang", I'd be assuming that you are not looking primarily for a DevOps Engineer, but a Go Developer who can do DevOps. And because I am not a Go Developer, you'd be putting my whole application at the bottom of the stack.
Your point, that doing this ad-hoc in go results in a half-baked shitty reimplementation of ansible/chef/fabric/whatever, was well made. I totally agree with it.
My point was that the DevOps person needs to be smart enough to pick up Go, which I'm sure you are.
Any hiring manager that puts your whole application at the bottom of the stack because of a lack of experience isn't a good hiring manager and isn't worth your time.
Do Fabric, Ansible, Salt, Helm, or Nix have the functionality of deploying to cloud infrastructures? There is a lot more to deploying to cloud infrastructures than just provisioning a server.
I've never seen these tools used in this capacity, especially nix.
I think such things have existed, but Nix deployment tools being relatively small projects with small teams or solo devs, a more common approach is to piggyback off of other IaC tools provisioning cloud infrastructure, e.g. by templating Terraform in Nix¹.
From a Nix user's perspective, this leaves a lot to be desired, because tools like Terraform are much less reliable than Nix tools that target local machines, and their management of, say, a VPC, is much less comprehensive than NixOS' management of an individual operating system. And they're slow as hell. But for the most part these issues are inherited from the APIs cloud providers expose, which don't meaningfully or uniformly support immutability.
One could argue that if you are at the stage of "provisioning a server", then you should be using something like Terraform or Pulumi, not configuration tools like the one I mentioned.
Anyway, for the specific case of "deploying to hetzner":
- ansible has a whole collection of modules for hetzner: https://docs.ansible.com/ansible/latest/collections/hetzner/...
- For Nix, there is NixOps
- if no one in the team wants to learn any of that, Hetzner provides a CLI to interact with their whole cloud.
To repeat: the last thing I'd want from an application is to have its own deployment system.
I tend to use go for "scripts" in non-go repos (typescript, C, etc) just for the simplicity of "you only need to install go" and pretty much all functionality is included in the standard library (usually something like JSON/XML/CSV parsing).
I'm OK with python for scripts, but go "just works".
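For instance, a stdlib-only "script" of that sort -- read JSON on stdin, print one field (the field name is illustrative), nothing to install beyond the Go toolchain:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    func main() {
        var doc map[string]any
        if err := json.NewDecoder(os.Stdin).Decode(&doc); err != nil {
            fmt.Fprintln(os.Stderr, "bad JSON:", err)
            os.Exit(1)
        }
        fmt.Println(doc["name"])
    }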
Python can be annoying for scripts because they are often hard to run on another machine due to the dependency story.
I think Go scripts have much better odds of “just working” with only the Go toolchain installed.
Go feels especially suited for moving between application code and command line tools.
I do this as well, but I pass two main arguments to my main executable: task and args. For example:
    go run cmd/server/main.go --task db --args migrate
    go run cmd/server/main.go --task cron --args reset-trial-accounts
etc.
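A minimal sketch of how that dispatch might look with the standard flag package (the task bodies are illustrative):

    package main

    import (
        "flag"
        "fmt"
        "os"
    )

    func main() {
        task := flag.String("task", "", "task to run (db, cron, ...)")
        args := flag.String("args", "", "argument passed to the task")
        flag.Parse()

        switch *task {
        case "db":
            fmt.Println("db task:", *args) // e.g. migrate
        case "cron":
            fmt.Println("cron task:", *args) // e.g. reset-trial-accounts
        case "":
            fmt.Println("starting server...") // no task: normal entry point
        default:
            fmt.Fprintln(os.Stderr, "unknown task:", *task)
            os.Exit(1)
        }
    }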
I use https://github.com/spf13/cobra religiously for this kind of thing - it handles all the annoying corner cases of parsing flags, and also has an intuitive notion of subcommands (with basic usage/help text generated) for picking which task you want to run with positional arguments.
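A minimal cobra sketch of the same idea as a subcommand (names illustrative); usage and help text come for free:

    package main

    import (
        "fmt"
        "os"

        "github.com/spf13/cobra"
    )

    func main() {
        root := &cobra.Command{Use: "server"}

        root.AddCommand(&cobra.Command{
            Use:   "db [migration]",
            Short: "Run a database task",
            Args:  cobra.ExactArgs(1), // exactly one positional argument
            Run: func(cmd *cobra.Command, args []string) {
                fmt.Println("db task:", args[0]) // e.g. migrate
            },
        })

        if err := root.Execute(); err != nil {
            os.Exit(1)
        }
    }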
I'll have to mess with it and see. It looks like it could make the general CLI ux better :)
Use the right tool for the job. The author's complaints seem to stem from not properly maintaining the supporting scripts in their project, which isn't at all a function of the language.
Assuming that a different language is better for a specific job, there is a cost associated with having another language.
When it comes to infra, someone's preferred language may not always be available. Infra and devops is more about plumbing, and plumbing gets messy.
It can go the other way as well. Tcl was originally conceived as an embeddable scripting language that can drive GUI applications, and has been very successful in that use-case. It has a few flaws, but I would have preferred it to have been embedded in the browser instead of making up JavaScript back in the day.
There's no cost associated with bash unless you're hiring muppets.
I encountered bugs and had to fix glaring problems in shell scripts literally everywhere I worked.
Most devs are not deeply steeped in Unix, and don't really know what they're doing with this as it's not their main job. They kinda cobble something together that "kinda works" – right up to the point it doesn't.
There is no shame in that; I do that with some stuff too – we all do – because there is no point learning something in-depth if you use it once a year. This is not a "zomg devs are stoopid" rant, it's just an observation of fact.
I've written tons of shell scripts and I love how it enables you to do something useful very quickly in very little code. Last week I hacked together a "ncurses"-type terminal music player in less than 200 lines which actually works fairly well, which I thought wasn't too bad for an evening of work.
But in general I distrust other people's shell scripts, and I found that 9 times out of 10 that's warranted. I tend to push for zsh as it fixes the most egregious footguns and limitations, but there's still plenty of things to do wrong, and the syntax can be unfamiliar.
> I encountered bugs and had to fix glaring problems in code literally everywhere I worked.
Fixed it for you. This is why they pay us the medium bucks.
Obviously that implies that the problems and bugs were much worse than average.
Something that “kinda works” actually does work. Fixing the edge case when it is actually a problem may actually save time compared to spending extra time up front on things that might never be a problem.
I should have been clearer about that: I'm not talking about theoretical edge cases. I'm talking about "I tried to run it and it doesn't work", or worse (e.g. wrong behaviour).
Shell scripts are often the peak of "works on my machine".
The problem mostly stems from the fact that macOS folks don't have Unix utils compatible with the Linux folks', and the majority of developers are on macOS but everything deploys on Linux. The easiest way to fix this is to install GNU utils on Mac so you're using the same bash and system tools everywhere.
That is one problem, but there are others: assuming things like the current working directory, assuming certain tools are installed and failing badly when they're not (e.g. does nothing but gives no error, or has the wrong behaviour and no warning), and things like that.
I’d agree: every critical shell script should work in all the places it currently needs to run. But, I think it’s reasonable to iterate here towards perfection: every time a new employee shows up or new environment comes up is a chance to uncover these problems and fix them.
Waiting for edge cases to break something, when a solution is known, is a symptom of lousy engineering culture.
There’s a balance here between solving the problem you actually have (YAGNI) and responsible engineering. Obviously you should test that your code works as expected in all the current environments but, sometimes, ensuring that all the edge cases are handled is just over-engineering.
People who say stuff like this must not have worked on enough Linux environments. Bash scripts aren't fully portable, and there are absolutely differences in environments which prevent portability of bash scripts. Since using Nix, I've realized the number of bash scripts which are only superficially portable are drastically higher than the number which are actually portable. As one trivial example, how many bash scripts use the more portable shebang of `#!/usr/bin/env bash`? Extremely few. And every one which hard codes `#!/bin/bash` has created a less portable script most likely without ever having realized it. Guess they are all muppets though right?
If you standardize the Unix utils you use and the bash version you use, it's really not hard to avoid a lot of the issues you're talking about. To get all developers in an org working on compatible ubuntu-like systems, just mandate use of WSL on Windows and GNU unix commands on mac so everything is the same for everyone.
That being said, if you and your team just don't have a lot of bash/unix experience, that is probably a good argument for writing your utility scripts in whatever you can actually write without a ton of bugs.
Would you complain about node bugs if your org wasn't standardizing the node version people used and some people were still on node 12??
Not true. Bash is filled with footguns. In fact even HN has articles hitting the front page every other month about another bash footgun and how to avoid it. Any muppet can write a bash script to do the thing, but only somebody with a lot of bash and Unix experience can make sure it's dodging the hundreds of common and obvious but incorrect ways of doing things.
No security concerns with your projects either! ;)
Unfortunately, the world's full of muppets, present company not excluded.
For everyday bash usage, sure.
But I worked with an extremely bash-skilled guy at IBM many years ago who went out of his way to write his scripts in the most arcane way possible. On purpose.
It was his form of job security, from rough memory of when I asked him wtf. ;)
I wasn't project lead on that one, so we just had to put up with it... :(
I would have disagreed (obscure syntax that's hard to memorize if you write bash rarely), but LLMs have really removed all of that.
My boss introduced a bug into our build system because of misunderstanding bash semantics. LLMs really haven’t solved this.
Bash makes everyone a muppet if you push it hard enough. (True of all tools, perhaps. But you don’t have to push very hard with bash.)
With Bash quoting rules it’s not like you have to push, it’s more like you don’t pull hard enough to keep it from going over the edge.
Shellcheck basically solves this problem
That's like saying opioids solve the cancer crisis, and Naloxone solves the opioid crisis.
No, it’s like saying shellcheck solves most of the challenges with writing shell scripts, which it does
But your metaphor is missing the apt comparison of shell scripts with cancer and opioids.
> there is a cost associated with having another language
Could you elaborate?
They’re probably talking about the “cost” to create and maintain a custom interface between languages
> The author's complaints seem to stem from not properly maintaining the supporting scripts in their project, which isn't at all a function of the language.
That, and often making the scripts overly complex and a software project in themselves. Chances are that if your building/deployment scripts are this complex, you are not leveraging modern tooling and platforms efficiently and have re-invented the wheel in script form.
A recent example I encountered was an Ansible environment. While looking at how other teams had set up their playbooks for something, I came across an extremely complex one. It basically pulled a bunch of Java apps from Artifactory and wrapped them in complex bash logic. All of this to do some conditional checks and send out a mail through a custom Java mail client.
This amounted to over a hundred lines of code that I could replace with a handful of lines in Ansible.
> Use the right tool for the job.
yes, this. Engineers frequently try to converge everything into the one-true-way and go too far. (I blame this thinking for phones without buttons or touchscreen cars)
Makes no sense to write scripts in C++. I've worked on projects where people have tried to do this, and they end up being more fragile, cumbersome and not that useful.
I think python is ok for cross-platform.
a shell is good for command line/os logic.
seriously a shell script with 'set -e' at the top, and a lot of command invocations and not much else is pretty easy for a group to maintain.
saying use the right tool for the job is about as helpful as xkcd 927
Recognizing that “what language should a project’s scripts be in” is too broad a question to have much more specific, universally applicable guidance is not a problem.
Pretending it is not, OTOH, is.
Scripts should be commented with intent. (I can figure out what it is doing, but I have no clue WHY)
Scripts should be testable.
Scripts should use functions with appropriately scoped variables. (This really helps with testability)
Scripts should list assumptions.
Scripts should CHECK assumptions.
Scripts should call commands with --long-style-options instead of -L, especially uncommon options.
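For the CHECK-assumptions item above, the fail-fast version is only a few lines in any language; a Go sketch with an illustrative tool list:

    // Check assumptions up front and fail loudly, instead of
    // discovering a missing tool halfway through the work.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        for _, tool := range []string{"git", "tar"} { // assumed tools
            if _, err := exec.LookPath(tool); err != nil {
                fmt.Fprintf(os.Stderr, "assumption violated: %s not on PATH\n", tool)
                os.Exit(1)
            }
        }
        // ... the actual work ...
    }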
---
As someone who migrated a couple hundred shell scripts over the past year, I'd rather have these done before I ask someone to write a script in C.
edit: and for ${deity} sake, use shellcheck.
I think you mean "${deity}" in case of spaces ...
> Scripts should call commands with --long-style-options instead of -L, especially uncommon options.
Too bad `getopts` only supports single-char options. :p
I'm not a fan of getopts
And to be more specific - I meant when the script calls other commands, not how it handles its own options.
The author's main bullet points are:
> The learning curve is minimal since you already know the corners of the language.
Learning a new language shouldn't be difficult. Programmers are expected to familiarize themselves with new tech.
> Internal language APIs can be leveraged, which drastically changes the mental model to write the script (for the better).
This is true. I myself have encountered situations where I needed to call into my C APIs from a higher-level language, but since most languages can interface with C this hasn't been an issue for me. For example I've interop'd Go+C, Python+C, and Lua+C.
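For reference, the Go+C case is only a few lines with cgo; a toy sketch:

    package main

    /*
    #include <stdio.h>
    static void greet(void) { printf("hello from C\n"); }
    */
    import "C"

    func main() {
        C.greet() // call into the C side from Go
    }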
> Scripts feel more natural and eventually maintainability increases. Team members are familiarized with the language!
This sounds like a subjective rehash of the first point.
> Development machines compatibility increases. Windows users can finally run all scripts.
This is true if you're talking about shell scripting, but if you're scripting with a general purpose programming language then it shouldn't be an issue. What language (besides shell) isn't portable these days? And even then, you can install a *nix environment on Windows.
> Learning a new language shouldn't be difficult. Programmers are expected to familiarize themselves with new tech.
I wish any large company agreed with this. I've worked for a company that onboarded every single new engineer to a very niche language (F#) in a few days. Also, everybody I worked with there was amazing. Probably because of that kind of mindset.
Meanwhile google tiptoes around teams adopting kotlin because "oh no, what if other teams touching the code might not be able to read it". Google is supposed to be hiring the brightest but internally is worried the brightest can't review slightly-different-java.
It's shocking how everybody acts like senior engineers might need months to learn a new language. Sure, maybe for some esoteric edge cases, but 5 mins on https://learnxinyminutes.com/ should get you 80% of the way there, and an afternoon looking at big projects or guidelines/examples should get you another 18% of the way.
> Sure, maybe for some esoteric edge cases, but 5 mins on https://learnxinyminutes.com/ should get you 80% of the way there, and an afternoon looking at big projects or guidelines/examples should get you another 18% of the way.
Not for C++, and even for other languages, it's not the language that's hard, it's the idioms.
Python written by experts can be well-nigh incomprehensible (you can save typing out exactly one line if you use list-comprehensions everywhere!).
Someone who knows Javascript well still needs to know all the nooks and crannies of the popular frameworks.
Java with the most popular frameworks (Spring/Boot/etc) can be impossible for a non-Java programmer to reason about (where's all this fucking magic coming from? Where is it documented? What are the other magic words I can put into comments?)
C# is turning into a C++ wannabe as far as comprehension complexity goes.
Right now, the quickest onboarding I've seen by far are Go codebases.
The knowledge tree required to contribute to a codebase can exist on a Deep axis and a Wide axis. C++ goes Deep and Wide. Go and C are the only languages whose projects I've seen go neither deep nor wide.
The hard part about learning C++ is the same stuff that’s hard about C, the rest is just stuff you can google TBH
> It's shocking how everybody acts like senior engineers might need months to learn a new language.
I've seen instances where people were worried it would take someone a month longer to fully onboard. Completely ignoring the fact that *fully* onboarding in any complex environment is going to take several months anyway.
I'd also argue that in the case of setting up your scripts, it matters even less. Automation scripts shouldn't be so complex that you fully need to know the ins and outs of the language they are written in. If they are, then maybe it is time to re-evaluate your building/deployment process.
Furthermore, I'd say that historically, both bash and python should be languages any semi competent developer at some point learns to work with to some degree. I say historically, because it always has been difficult to not encounter it when doing software development in the past... 20 years or so. But with modern environments and deployments it is more feasible as much more is abstracted away in pipeline yaml syntax.
>It's shocking how everybody acts like senior engineers might need months to learn a new language.
It's true for some languages, like C++, some might wrongly extrapolate from that. I agree with your general point though. If your senior engs can't learn Python/Ruby/F# etc in a few days to a level where they can contribute, you might want to ask yourself what senior means in your org.
> Meanwhile google tiptoes around teams adopting kotlin because "oh no, what if other teams touching the code might not be able to read it".
That is highly ironic, seeing how often Google tries randomly shoving Dart down people's throats.
Ridiculous. You must be talking about learning only the syntax of a new language. Truly internalizing how to wield a new language will take months to years.
A dev that switched from language X to Y "in a few days" will just write X using Y syntax. Here's a good example of going from Java to Python: https://youtu.be/wf-BqAjZb8M?t=1327.
>> The learning curve is minimal since you already know the corners of the language.
> Learning a new language shouldn't be difficult. Programmers are expected to familiarize themselves with new tech.
But in practice, it is. Maybe you're on a team of elite 10x programmers that can quickly become experts in anything, but that's rare. A lot of programmers don't want to bother coming up to speed with the quirky choice of some past developer. And a lot of places have programmers that aren't even that good with the "project main language," and just lack the ability to become productive in that quirky choice in a reasonable amount of time.
Defensive coding against organizational problems is not a bad thing.
> But in practice, it is.
Indeed. There's also 'learning' and learning: really knowing all the nooks and crannies of a language, learning the standard library and learning popular third party libraries all takes time and makes a big difference to code quality.
Depends on the language. I am competent in half a dozen languages, and write a lot of functional code.
I have written a few projects in Haskell, but I freely admit that when I read any of Simon Peyton Jones's papers (e.g. the one on build systems referenced on HN recently) I am in awe of the way he can map concepts into Haskell code.
> But in practice, it is. Maybe you're on a team of elite 10x programmers that can quickly become experts in anything, but that's rare.
Pray that you aren't, because that is a recipe for misery.
> Learning a new language shouldn't be difficult. Programmers are expected to familiarize themselves with new tech.
Learning it is fine, but will you still know it 2 years from now when you need to modify the script? Me, definitely not.
Could you re-learn it then if you need again?
Yep, but it all adds a lot of pointless work.
I'm so much happier since I started using JS for scripts, rather than the monstrosity that is Bash.
I think Crystal is a popular one which doesn't yet fully support Windows, at least last time I checked.
> Learning a new language shouldn't be difficult. Programmers are expected to familiarize themselves with new tech.
I think that expectation is a problem.
I mean, sure, you can't use one single technology for any non-trivial project. But, on the other hand, is it really faster to read the spec and short comings of 20k different `left-pad` type "tech"?
I think there's a line to be drawn: each new $TECH added as a dependency is a liability. The expectation should not be "throw every single tech we can think of into there, because programmers are expected to learn new tech".
The calculus really should be "Each new $TECH we add increases our hiring burden, our ramp-up times, our diagnostic burden for when things go pear-shaped, our cognitive load when actually adding features, our tests, and eats into our training-time budget."
Those are a lot of downsides, so before padding their CV the responsible developer should be balancing the trade-offs.
Unfortunately that is rarely how it actually works.
I have a C++ project where the documentation is generated using a script that I wrote in C++. Woof. I didn't want to add a compile-time dependency on another programming language, but C++ is rough as a scripting language. If my script needs were any more complex, I'd be thinking hard about how bad a compile-time dependency on Python really is.
I'm not sure I've even seen a recent, large C++ project that didn't depend on Python or some other external scripting language just to build it, so it's kind of hard to imagine using C++ itself to solve the problem that using C++ creates.
I worked with a guy in a C++ RPC team (think Envoy, but proprietary). He wrote the build tool that was used by our team, which maintained several fairly large C++ programs and libraries. He wrote it all in C++. He was of the opinion that most scripting tasks on the team could be accomplished with a small C++ program. It helped that we had a portable kitchen sink of libraries at our disposal, but he wasn't above using std::system to avoid the hassle.
someone added a script written in Rust to our project. They compiled it for ARM. So only people on Macbooks can run the damn thing. Nice one Rust bro.
No! General purpose scripts should almost always be written in bash. It's basically the best language for doing simple things with files, it's universally available and it makes almost no assumptions about the environment in which it executes.
Have windows users use WSL (the VSCode integration is great!), and mac users should install GNU tools since the system tools are obnoxiously incompatible.
The only time I've found that scripts should be in another language is:
1. You need to call libs to do something fancy and it would be too troublesome to make a small Unix-style executable to do the thing.
2. The developers on your team lack Unix/bash experience, and you don't trust them to learn in a timely manner (sad).
> Have windows users use WSL (the VSCode integration is great!), and mac users should install GNU tools since the system tools are obnoxiously incompatible.
At that point you might as well target Python 3.6. Seems like the same hassle for the developer to install and you don't have to worry about wonky differences for users who haven't installed GNU tools, but still think they can run your script because it says `.sh`
That's not correct, because then you have to make sure all your docker images and deployment environments have the correct version of python, and managing python installations for incompetent Mac devs is probably more work than managing a GNU utils install since python installs have been known to break easily whereas the core utils are rock solid. Plus writing file manipulation/subprocess scripts in python is really awkward compared to bash.
Sad that you're being downvoted for the truth.
Unless you're doing some extremely niche work, Bash >= 3.2 (because Mac) is nearly always going to be available. Even if it _isn't_, there will still be sh or dash, and it's not _that_ hard to stick with pure POSIX for most small uses.
The last time I (by which I mean my team) rewrote a script from Bash into Python was because it had gotten unwieldy over time, I was the sole maintainer, and very few other people at the company knew Bash well enough to understand some of it. The upside was testing frameworks in Python are way better than Bash.
First you write in shell without knowing the language, and blow your foot off. A few years go by. "I should use a _real_ language, I'm not an amateur anymore." So you write everything in Python. A few years go by. "I should learn shell, and use it only when appropriate."
Maybe this is what Perl is for, but I never learned it.
> If they can use the main language, awesome. If they can’t, a higher-level scripting language with native support (e.g., Python) should be adopted, since it provides the means to increase maintainability in the long run.
I think this point is especially important for C++ projects. It is my gut feeling that C++ and Python cluster very closely in terms of developer familiarity. That is, a C++ developer very likely is also a passable Python developer.
Given that it tends to take more time to write a C++ program than the equivalent Python program, the stable result is that many C++ projects 1) expose C++ to Python (via e.g. pybind11) and 2) write all scripts in Python.
And you get almost all of the benefits that the article suggests, because almost all C++ developers are also Python developers.
Disagreed from me. Though, I'm not sure how hard my disagreement truly is.
Discrete programs are superior in many ways, as you do not immediately incur maintenance costs to write them. More, they typically force you to have a discrete API that they work with, and then you can lean on that.
Yes, you can do all of this with modular programming techniques. Indeed, "unit tests" are easy to see as similar to what I'm advocating here. Such that I think my assertion is softer than many folks are probably seeing. If you are "scripting" something to add data to the system, it should emphatically not hit the database directly.
I don't know where this lands me on the infrastructure as code (IaC) debate. I'm sympathetic to the desire. I start to think of it as navel gazing when I see some of the very engineered testing practices some people take those to.
> For example, writing scripts on JVM languages would require additional effort to build a toolchain that compiles and runs files on the fly, with a short start time.
That hasn't been true since at least Java 11. You can execute any .java file using `java foo.java`. No compilation required. You can reference dependencies using the usual classpath options etc.
Startup time is minimal.
Been using such scripts in exactly the way the author suggests for years. Much more pleasant than messing around with maven or gradle plugins.
> You can reference dependencies using the usual classpath options etc
I don't know anyone who thinks that a Java script (being the main project language) is going to surpass:
    #!/bin/sh
    dep-start.sh
    ./gradlew clean bootRun
Maybe there are outliers with this "hot take". The results of years of projects (even changing hands) are a lot more instructive than someone positing the theoretical value of reusability.
That example doesn't even need a script, it could literally be a single gradle task. Some things are lot easier to do in a platform independent way in Java (or Groovy) than in shell scripts. And unlike the latter, the former can be tested just like any other part of the code base.
It always bugged me that build scripts are hardly ever tested or engineered. They just grow into giant balls of mud.
I think that while the basic idea of advocating for stack consistency between the main project and any support scripts is very nice, I always find it not worth pursuing in practice, for several reasons: a lot of the “environment” around the main codebase will be a weird mix of YAML, bash, and a scripting language like Python or Ruby for things like GitLab, Airflow, GitHub Actions, etc. Given this heterogeneity of the project environment, and the additional complexity of “forcing” something to use a language it might not be very well suited for, this is really a no-brainer for me: use the most convenient tool for the job. Plus, as a JVM-bound developer, I love me my occasional Python.
I agree with this sentiment. At work our main codebase is in Rust and early on we wrote some CI tooling using the cargo xtask workflow and it adds a lot to compile times when we need to rebuild it (which is often since it depends on several of our main codebase's crates). It really kills iteration times.
This is a mess and added a lot of extra complexity to the build pipeline since now we had to manage an additional xtask container. Python is very well suited for CI scripting, xtask should only be used for things directly run by the developer, and even then Python may be a better choice for most things.
Keeping scripts in the main project language usually means the monolith gets larger and more like a final boss than something the team controls.
Use the language that best fits the job. For me that's TypeScript on a serverless platform, SwiftUI front-end and Java for business logic.
I used to feel like I was winning using only Java for UI, Web and business logic, but the complexity became immense. It's too easy to create yet-another-internal-API that gets forgotten until it needs to be refactored.
Also learning a language gives you a new perspective on software engineering, a bit like learning a foreign language gives a new view on human culture.
I like to take this a step further and try to have all my most important tooling written in my project's major languages. It's not a must-have, but having a build system for a Go project that is also Go under the hood means I'm more comfortable diving into the source if needed.
I don't think it's a requirement, but it's an advantage
> The learning curve is minimal since you already know the corners of the language
But that does not imply you know how to get the creation date of a file or how to zip a directory.
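For example, "zip a directory" is all stdlib in Go, but hardly something you know just from knowing the language; a sketch:

    package main

    import (
        "archive/zip"
        "io"
        "io/fs"
        "os"
        "path/filepath"
    )

    // zipDir writes every regular file under src into the archive dst.
    func zipDir(src, dst string) error {
        f, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer f.Close()
        w := zip.NewWriter(f)
        defer w.Close()

        return filepath.WalkDir(src, func(path string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            rel, err := filepath.Rel(src, path)
            if err != nil {
                return err
            }
            out, err := w.Create(filepath.ToSlash(rel)) // zip entries use forward slashes
            if err != nil {
                return err
            }
            in, err := os.Open(path)
            if err != nil {
                return err
            }
            defer in.Close()
            _, err = io.Copy(out, in)
            return err
        })
    }

    func main() { _ = zipDir("somedir", "somedir.zip") }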
> Internal language APIs can be leveraged, which drastically changes the mental model to write the script (for the better)
That sounds like a rather empty statement.
> Scripts feel more natural and eventually maintainability increases. Team members are familiarized with the language!
Don't you think familiarity with the OS (or OSes) comes first? And that knowledge usually comes with the knowledge of a shell or batch language.
> Development machines compatibility increases. Windows users can finally run all scripts.
Script development time increases, too. And Windows has WSL nowadays.
For some languages this can make sense, but I think most languages aren't suited for installing things, manipulating the filesystem, etc.
I think of scripts as the middleware between the operating system and the shipped code. The code is controlled by the operating system, so the operating system's tools should be used to manage it. In many cases this means bash or make.
Plus, I don't want modern Javascript to do things on the filesystem that would require importing dozens of projects that I need to vet before using. Golang or Python perhaps, but the buildchain for modern Javascript is hell as-is; it doesn't need another layer of Javascript.
Where I work, most programming is C#. We use a lot of LINQPad scripts, which is a lightweight environment to run C# in.
There's also CS-Script
My preferred lightweight environment to run c# in is a console app.
I have never understood the appeal of LinqPad whatsoever.
Next time you want to write a few lines of C# to verify a detail about syntax, count how many clicks you need to go through to create a new console app project / solution.
It's much faster to just hit the + in LINQPad.
It's quite useful when I'm reviewing someone else's code, or if I'm "in the zone" in a large change. I can verify some syntax in a few seconds, as opposed to the minutes it takes to make a throwaway console app project / solution.
It is kind of like csharprepl[0] but more powerful for the scenarios it targets. However, since it only supports Windows it's a non-starter personally so I never used it much.
I've always found it weird that the NPM ecosystem doesn't have something like Rake from the Ruby world to run tasks. Javascript things tend to be VERY task heavy, with dev servers, bundlers, testing, and coverage all being defined in the project normally.
The package.json scripts "work", but it's quite clunky, and relying on shell scripts that run node.js scripts causes issues. (cross-env solving a problem that really shouldn't exist.)
I’m of the opinion that the package.json scripts was a mistake (although not a huge deal in the grand scheme of things): it solves 80% of the problem, which removes enough pain that there’s not enough incentive to solve the last 20%.
I’m sure there’s something like rake that exists for Node, but the community won’t standardise on it because it’s not enough a problem
Gulp is still a thing. Or even Grunt if that’s your poison.
A clear example of only having a hammer, so every problem is a nail.
I usually go with the Google Shell Style Guide: https://google.github.io/styleguide/shellguide.html
* If performance matters, use something other than shell.
* If you are writing a script that is more than 100 lines long, or that uses non-straightforward control flow logic, you should rewrite it in a more structured language now. Bear in mind that scripts grow. Rewrite your script early to avoid a more time-consuming rewrite at a later date.
* When assessing the complexity of your code (e.g. to decide whether to switch languages) consider whether the code is easily maintainable by people other than its author.
Absolutely not. Use the right tool for the job. There's nothing particularly wrong with shell scripts, as long as the people who maintain them are reasonably good at not making them into a dangerous nightmare waiting to delete your home directory or something.
Using the same language allows for eventually placing those scripts in background jobs.
In Ruby/Rails, the concept of ad-hoc script execution is baked in via rake.
For Java projects, scripts can be written in a JVM language like Clojure or JRuby. Can still leverage the Java code in the rest of your project, while writing in a more dynamic, interactive language.
If you think JBang isn't dynamic enough for such scenario, check this out: https://scala-cli.virtuslab.org/scripting
Scala is a lot less exotic than Clojure or JRuby to most Java devs, as expressive as Groovy, yet fully type-checked.
Clojure -> Babashka, better than regular bash
Given the resultant comment section I clicked into the article expecting the most sith-like of absolutist proclamations taking itself extremely seriously and actively fanning arbitrary ideological flames.
Instead I found a concise, measured and sane opinion that I honestly can’t disagree with. Like, sure, if your language broadly supports the secondary use case (scripting) why not? If it doesn’t, then the juice almost certainly won’t be worth the squeeze.
Scripts are usually no better than they have to be - they're not the main purpose of the system. Scripting languages handle all sorts of issues that take pages of code in other languages so they are the easiest way to do what you need and easy wins.
IMO some languages like C/C++ cry out for an embedded scripting language so that you don't write the basic, one off, performance insensitive parts of your code in a "hard" language. This is taking it the other way around - suggesting that as much as possible of your "main project" should be in a scripting language so that you're not wasting "hard" development cycles on areas that don't need it.
>pages of code
Code reuse.
The benefits of a hard language: 1) I can now do interesting things like text parsing, 2) my scripts are cross-platform, 3) I don't need to figure out how to deploy Python everywhere in advance or on demand, 4) if the user has Python, I don't need to tell him I don't like his Python version and he must install a different OS, 5) I don't have Python as an extra dependency, 6) I can reuse my main code in my scripts, 7) scripts are written in a language with a decent type system.
One small counter I have is that most software development has dependencies anyhow - and with compiled languages this is quite a pain in the arse to manage. So installing tcl or lua or python or ruby or perl is just a normal problem amongst other problems.
If you want type systems everywhere then I think that's another debate. That is really a total rejection of almost all scripting languages for all purposes.
I don't agree. In my experience a really big problem is when you can't easily see the borders between the parts of a project, and that's the case with React and many other full-stack environments.
On the other hand, it's also not good when, for example, the front end uses a strictly typed language and the back end a dynamic one, so you constantly need to do conversions.
So you need some reasonable combination. Traditional Java + JS looks reasonable. JS (front) + Python (back) is also reasonable. A C backend seems weird to me, so there should be some intermediate language, for example Lua.
Well, it depends. If your project uses Zig, you obviously use the Zig build system.
If your project uses C or C++, you also use the Zig build system.
Advocacy - Maintain it With Zig:
https://kristoff.it/blog/maintain-it-with-zig/
HN discussion:
https://news.ycombinator.com/item?id=35566791
Reference:
Really interesting comments here. I haven't found an appealing option for scripts and CI (I think I am allergic to YAML); bash is just way too fragile. So I've decided to write my own scripting language, written in Go, similar in many ways to Lua, but with first-class support for executing other commands, running API tests, and manipulating data. Currently dogfooding, with plans to share in a few months once I'm confident it's good enough to start getting feedback. So: one executable plus your scripts, cross-platform, no YAML.
Depends on the project, depends on the team, depends on the language, depends on the script. Pick a sensible tool for the job, write understandable code, don't lose sleep over silly generalizations.
I would have agreed more with this a couple of years ago when maintaining a script written in an alternative language required me to know that language.
These days I'm much more comfortable both writing and maintaining code in languages that aren't my daily driver (like Bash or jq or AppleScript or even Go) because I can get an LLM to do most of the work for me, and help me understand the bits that don't make sense to me.
I still find it hard to believe that LLMs provide any real edge over the kind of googling/man-paging that's necessary for understanding scripts, especially given the false positives chatbots are known for. Typically language + feature + library is enough to look something up in under ten seconds for me.
Granted, this probably takes a fair amount of experience to even know what you're looking at well enough to search for it.
The good ones (Claude 3 Opus, GPT-4) are really incredibly good at understanding scripts. It's very rare that they hallucinate anything, especially important details for the more widely used tools that I tend to stick to.
I trust LLMs with tools like Bash and jq and ffmpeg which have been around for years. I wouldn't trust them with anything released within the past 12-24 months.
An example from just the other day: https://til.simonwillison.net/go/installing-tools
I wanted to understand this:
    go install github.com/icholy/semgrepx@latest

How does that @latest reference work? There's no branch or tag on that repo called "latest". I tried and failed to find documentation, so I gave up and asked GPT-4, which said:
> @latest: This specifies the version of the package you want to install. In this case, latest means that the Go tool will install the latest version of the package available. The Go tool uses the versioning information from the repository's tags to determine the latest version. If the repository follows semantic versioning, the latest version is the one with the highest version number. If there are no version tags, latest will refer to the most recent commit on the default branch of the repository.
Is that correct? I have no idea! But it still gave me more to go on than my failed attempts with the real documentation.
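For what it's worth, you can sanity-check the tag-related part of that answer without trusting the model. These are standard git and Go commands, nothing specific to this repo:

    # list whatever version tags the repository actually has
    git ls-remote --tags https://github.com/icholy/semgrepx

    # ask the Go module machinery which versions it knows about
    # (run inside any Go module; `go list -m` needs module mode)
    go list -m -versions github.com/icholy/semgrepx

If the first command prints no semver tags, @latest falls back to a pseudo-version derived from the default branch, which matches the behavior described above.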
I appreciate the illustration! I can certainly see how this would be useful especially for tools like ffmpeg, which is both difficult to google and understand and doesn't have a large overlap with patterns other tools use.
Soon many projects will be written in LLM: a mix of whatever languages the models generated on given day + an LLM so that the maintainers can understand it.
We have some Perl scripts that are freaking immortal.
The rest of the code base started in Java, then Clojure, now it’s Go. The scripts are there still in their very-not-modern Perl style though. They have a self-evaluating behavior consisting of data blocks that are interpolated. To be honest, I’m not sure exactly how it works. Very discouraging for the casual passerby looking for some cleanup to do.
There is a reason shells and shell scripts exist. Reinventing basic shell functionality seems pretty dull; that time would be better spent staring at a wall to decompress.
I always start with a simple shell script which does most of the work as simply as possible; even simpler is using `make`, then introducing shell scripts when needed.
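A sketch of that progression, with hypothetical target and script names: make contributes task names and dependency ordering, while each recipe stays plain shell until it outgrows a line or two and gets promoted to its own script:

    # Makefile -- hypothetical targets; recipes must be indented with tabs
    .PHONY: build test clean

    build:
    	./scripts/build.sh

    test: build
    	./scripts/test.sh

    clean:
    	rm -rf dist/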
I try very hard to do just the opposite. If I need a script to do something existing code can't, then it is most likely exciting and I'm going to use that energy to learn something new, be it a language or tool.
Here's a hotter take: your team colleagues are more likely to wear your clothes than to use your scripts.
Say this to your C++ embedded project (no lang rants now, it is what it is;)): ehrm, no, please no, ridiculous.
This really only applies to a subset of languages and projects with such languages. I wish developers had the courage to face the deeper problem (being unwilling to read code... older than a few weeks? Such a developer will face worse things in the future; they should train their focus stamina).
Scripts should be written in JavaScript, only if proven insufficient a different scripting language should be used.
/Serious!
It does have Script in the name!
Performance concerns might creep up in scripts, and it's tricky to figure out the issue if you aren't familiar with the internal abstractions.
Using Google’s zx I write my support scripts in JS and can use shell features like piping right in the same file.
Agreed. My scripts got much better when I abandoned Bash, and switched to Zx, which is basically NodeJS.
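For anyone who hasn't seen zx, a minimal sketch (file name hypothetical): the $ tagged template runs a command and quotes interpolated values for you, and processes can be piped into each other in the same file:

    #!/usr/bin/env zx
    // recent-fixes.mjs -- run with `zx recent-fixes.mjs`
    const branch = (await $`git branch --show-current`).stdout.trim()
    console.log(`on branch ${branch}`)
    // pipe one process into another, shell-style, but in JavaScript
    await $`git log --oneline`.pipe($`head -5`)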
>> Scripts should be written using the project main language
... as long as that main language is a scripting language. Otherwise it's just dumb.
Also, tunnel vision by the author: "Almost all projects I’ve worked on have scripts we wrote to automate a repetitive process. "
Well almost all projects I’ve worked on have scripts we wrote to automate a ONE-TIME process. Like collect some data from the log to figure out a bug, fix it and forget about both the bug and the script. Automate it since can't manually process 30Gb of data and grep only can do so much. Sure as funk won't write the "script" in C++ but Python or Perl or something.
Ehhhhhh. I get the sentiment, but I think this really depends on what the "main" language is and how amenable it is to scripting.
Personally, I prefer writing shell scripts regardless of what the main language is in a given project. They're portable (more or less), and can e.g. detect missing dependencies and install them if necessary, which isn't possible with the main language if you're missing its compiler or interpreter.
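A hedged sketch of that bootstrap pattern (the tool list is hypothetical; actually installing missing tools is distro-specific, so this version only detects and fails fast):

    #!/usr/bin/env bash
    set -euo pipefail

    # fail fast if the tools the real work needs are missing
    for tool in jq curl python3; do
      if ! command -v "$tool" >/dev/null 2>&1; then
        echo "missing dependency: $tool" >&2
        exit 1
      fi
    done

    echo "all dependencies present"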
man, I work in python but I really don’t enjoy writing scripts in it. Everyone here acting like it’s a scripting language. Culture shock.
I like the general idea, but it depends.
Sometimes these bash scripts are used in lots of contexts, local dev, CI pipelines, Docker build steps... good luck running Java or Rust in the last two. Even if you get it to work, good luck debugging if there are any issues.
I think it's a strong "it depends". If the script interfaces with data used/produced by the project, sure, that seems natural. If the script does something more system-near and the main language is not as convenient for it, then... no.
I'd hate to do with typescript what we're doing in scripts with bash.. just like I'd hate doing in bash what we do in typescript..
So ... Python
I'll admit I'm not reading the article, but this is a hard no for C development. Yeah, it can be done not so terribly, but just use a normal scripting language.
In C#, I could see doing this.
Experience report: I did this for my company that uses Kotlin as its main language and it worked out brilliantly. We extended Kotlin Scripting into "HShell" and it swiftly replaced basically all uses of bash or Python for our internal scripting needs. The docs are public here, although the product isn't open source. It gives you a flavor of how it works though:
https://hshell.hydraulic.dev/14.0/
Advantages:
• It's as concise as bash but far more readable, logical and less bug-prone thanks to the static type system with plenty of type inference.
• IntelliJ can provide a lot of assistance.
• You can easily import and use any JVM library, and there are lots of them.
• Ditto for internal project modules if you work on the JVM.
• If you need to, you can easily mix in and evaluate Python/Java/etc using Graal. It has an integrated disk cache that can be used for storing file trees that manages free disk space automatically, and there's a command to set up a Python virtualenv in the disk cache transparently.
• We have a high level shell API that makes console, file and network operations as easy as in bash, and sometimes easier because the commands aren't afraid to deviate from POSIX when that would be more convenient. For example most operations are recursive by default.
• A smart progress tracking framework is integrated into every operation including things like wget.
• It's fully portable to Windows including edge cases like being able to set POSIX permissions on a file, add it to a tar, and the resulting tar will retain the correct permissions.
• SSH is deeply integrated, and so commands "do the right thing" automatically. For example all the commands take Path objects as well as strings, and path objects track which machine they refer to, so if you open up an SSH sub-shell you can easily copy to/from the remote machine using regular copy/move commands. Other commands are "smart" for example the wget() function given a path that's on a remote machine will execute curl or wget remotely rather than download locally then reupload.
Although this sounds like it was all a lot of work to build, in reality our main product is a kind of build system (it makes deploying desktop apps easy, see bio for link). So all that functionality is in actuality functionality we built for the product and just kept nicely factored out into modules. The work invested into the scripting-specific parts is probably a couple of weeks over a period of a couple of years, and it was well worth it given the number of scripts we have for things like QA, deployment, server management and so on.
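HShell itself isn't open source, but plain Kotlin scripting (.main.kts, a standard Kotlin feature) gives a taste of the same idea using nothing beyond the Kotlin/Java standard library. To be clear, this sketch is not HShell's API:

    #!/usr/bin/env kotlin
    // status.main.kts -- hypothetical script; run with `kotlin status.main.kts`
    import java.io.File

    // run an external command and capture its output
    fun sh(vararg cmd: String): String =
        ProcessBuilder(*cmd).redirectErrorStream(true).start()
            .inputStream.bufferedReader().readText().trim()

    println("on branch: " + sh("git", "branch", "--show-current"))

    // type-checked file handling instead of string-splicing in bash
    File("build").walkTopDown()
        .filter { it.extension == "log" }
        .forEach { println("${it.length()} bytes  ${it.path}") }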
Hard pass - many languages simply don't prioritize the experience of executing shell commands.
Tbh bash/sh don’t prioritise the experience of _writing_ commands, which might be worse. I’m decent with Bash, but to this day I don’t know how to properly document and parse flags (long and short form) and positional arguments (with validation, while we’re at it) so that it all works as I would expect.
There's no best answer. I do it manually in a `while` with `case` and `shift`. For anything more, I use environment variables or Python.
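For illustration, the while/case/shift pattern spelled out, with long and short flags and one validated positional argument (the flag names are hypothetical):

    #!/usr/bin/env bash
    set -euo pipefail

    usage() { echo "usage: $0 [-v|--verbose] [-o|--output FILE] TARGET" >&2; exit 1; }

    verbose=0 output=/dev/stdout target=""
    while [[ $# -gt 0 ]]; do
      case "$1" in
        -v|--verbose) verbose=1; shift ;;
        -o|--output)  output="${2:?--output needs a value}"; shift 2 ;;
        -h|--help)    usage ;;
        --)           shift; break ;;
        -*)           echo "unknown flag: $1" >&2; usage ;;
        *)            target="$1"; shift ;;
      esac
    done
    [[ -n "$target" ]] || usage

    (( verbose )) && echo "writing to $output" >&2
    echo "processing $target" > "$output"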
We should use YAML for all of our automation so that non-developers can build and maintain the automation, and then exclusively assign developers who are proficient in a general purpose programming language to build and maintain the YAML automation. </s>
Simply put the general purpose programming language inside the YAML automation.