I Got Sick of Remembering Port Numbers
It's like someone should make a file... maybe in /etc ... and put short names for services in it... maybe it could be called /etc/services...
And then they might code up some sort of service lookup tool thingy to use on the train wreck that is the modern web.
    $ getent services gopher
    gopher                70/tcp

And if they want name resolution, maybe even names that reflect the scope of its location, like .localhost or .internal
Various services have existed, such as portmap(8), though NFS and similar services have often suffered from the "too complicated to debug" problem, where devops (then sysadmins) would try turning the system off and on again in the hopes of resolving the issue du jour. You might get lucky, determine that node number three (of many) was cursed, leave it switched off for the Season of Mammon (more commonly known as Christmas), and retire it quietly later. Hypothetically.
Generally host and port mapping gets shoved somewhere into the configuration management layer and hopefully does not become too complicated (or grow too many security holes), as this could vary from "configuration files and a few scripts" to database and service layers that few can debug, especially not a sysadmin at 3 AM running on an hour of bad sleep. Hypothetically.
but you want security, so NIS+
Heck, maybe even `resolvectl service`?
this is a nice idea, but idk why, in macos if i do `nc -l 127.0.0.1 gopher` and then try to open url "http://127.0.0.1:gopher/" - safari does not open it, no requests visible in the `nc` output.
also `curl -v http://127.0.0.1:gopher/` gives an error message:

    * URL rejected: Port number was not a decimal number between 0 and 65535
    * Closing connection
    curl: (3) URL rejected: Port number was not a decimal number between 0 and 65535

so the ports are named, which is nice, but in practice it does not make life easier.

> http://...:gopher
is it http or gopher? :)
i chose gopher port just as an example. try with any other service name mapped to a port number from /etc/services and the result will be the same. the OP's goal was to use many http/https services, so we are talking about many http(s) services.
i just wanted to make the point that even if you have service names in /etc/services, it is not possible to use those names easily to host/access http(s) services.
The names are the kinds of servers that listen on those ports (by default), like ssh, telnet, http, and smtp. They are not subdomains, nor meant for URI parsing.
Well, the entire context of this is https so anything else is immaterial. The only reason it would be gopher is if you didn't read the post or don't understand the basics of https.
Sounds like you need an AI agent that can determine whether http and gopher are the same protocol
As bandie pointed out, you're explicitly making an HTTP request. Duh.
nc is for generic connections and handles it well.
i know, but the OP's goal was to host/access http(s) services with names and avoid port numbers, and gopher service name was chosen by me as an example. my point was that /etc/services cannot be used for the OP's need.
if you host an http(s) service on port 11111 you can reach it with url http://127.1:11111, but url http://127.1:vce/ would not work in most software.
    $ grep 11111 /etc/services
    vce             11111/udp    # Viral Computing Environment (VCE)
    vce             11111/tcp    # Viral Computing Environment (VCE)

If I curl my phone number it doesn't connect, that's strange
But if you want to contact vce, why use "http"? It's not going to work
Try http://127.0.0.1:hkp instead of http://127.0.0.1:11371 for an OpenPGP HTTP keyserver. HTTP will work, but using the service name won't. Does that make what they're trying to say clearer?
That would mean not being able to vibe code up an entire app to deal with something as insurmountable as looking at a list of numbers and post it on HN for those sweet, sweet upvotes. Why would they not do that.
Perhaps we could even make the file the port itself, perhaps calling it a “socket”? A “unix socket” would be a great name. If we could place all these files behind a local reverse proxy then we could use localhost/jekyll or localhost/fastapi. It’s just a dream
Sure, but they are running web apps they've vibe-coded (hence the .vibe tld), and for that use case of many web apps that I run in docker containers I use nginx-proxy [0]. All the container needs is a VIRTUAL_HOST environment variable with the domain, and what my router needs is an address entry for the wildcard subdomains. I even have nginx-proxy on an internet-accessible staging server.
If the port number space was bigger, I wonder if we would have gotten a global naming service (ala DNS) for unique service names.
You can still publish port numbers along with addresses in DNS though (SRV records).
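A sketch of what that looks like in a zone file; the names here are the reserved documentation ones (example.com, 192.0.2.0/24), not anything real:

```
; SRV record fields: priority weight port target
_http._tcp.example.com.  3600 IN SRV 10 5 8080 app1.example.com.
app1.example.com.        3600 IN A   192.0.2.10
```

A client that does the SRV lookup learns both the host and the port, so the port never has to be published out of band.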
You have to be root to edit /etc/services ...
I am pretty convinced you need root on most systems to update DNS resolution mechanism system-wide (eg. to edit /etc/hosts or run a local DNS server and put that into /etc/resolv.conf).
Technically you can set the HOSTALIASES variable to point to a custom hosts file, but that only works with programs that use gethostbyname(3). (Which is most of them? IDK.)
Not modern enough. Unix is too low level, antiquated, and discriminates against those who just want to get shit done instead of reading manpages or documentation by hand.
This is the best example of Poe’s Law I’ve ever seen. Well done…?
Top reply, and clearly based on the article's title rather than its content, as are the follow-ups. You're making this site worse.
The article is short; go read it then come back and delete.
The article is about the dude not knowing what service is where, so he codes a JSON mapping. He could just update his /etc/services for the same thing. Oh, but wait, he mentioned AI agents, and that changes everything!
Sounds like more of a problem with the title than the person you're attempting to insult.
What about identifying different instances of the same service?
The best way to find the right answer on the Internet is to slop-code a half-assed solution to a long-solved problem.
https://meta.wikimedia.org/wiki/Cunningham%27s_Law
Sidenote: A good AI would interject, Clippy-like, "It looks like you're trying to recreate /etc/services. Would you like me to explain what that is?"
URLs already have default ports for service names as a feature.
http:// means port 80 unless specified otherwise
https:// means port 443 unless specified otherwise
ftp:// means port 21 unless specified otherwise
sftp:// means port 22 unless specified otherwise
...
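Those scheme defaults line up with the standard /etc/services entries; a quick sketch checking that with Python's `socket` module, which reads that file:

```python
import socket

# the well-known defaults are standard /etc/services entries
defaults = {scheme: socket.getservbyname(scheme, "tcp")
            for scheme in ("http", "https", "ftp")}
print(defaults)  # {'http': 80, 'https': 443, 'ftp': 21}
```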
The practical solution for TFA is actually just an nginx server running on port 80 with proxy_pass
    ...
    location /blog/ { proxy_pass http://127.0.0.1:3000; }
    location /tensorboard/ { proxy_pass http://127.0.0.1:6006; }

How many little web servers work without issue when their root page is loaded from a path other than /?
If that's your concern you can also do this
HTTP/1.1 and later have the browser supply the domain name that was used to access the site, and even though *.localhost names all resolve to 127.0.0.1, nginx will pluck out the correct configuration and proxy_pass to the correct one.

    server {
        listen 80;
        server_name "tensorboard.localhost";
        location / { proxy_pass http://127.0.0.1:6006; }
    }
    server {
        listen 80;
        server_name "blog.localhost";
        location / { proxy_pass http://127.0.0.1:3000; }
    }
That's because those are defined in /etc/services (really, in the IANA registry, which is where /etc/services gets its mappings). You're putting the cart before the horse
This is the exact problem I see with all of this vibe coded software: in a few years everything will be super fragmented, everyone will be using their own set of tools, or vibe coding them themselves. Communication between teams, or even between team members, will become very hard because of those differences. 'What do you mean production is down? On my vibe coded dashboard everything is green!'
It’s the Lisp curse again.
“[X] is so powerful that problems which are technical issues in other programming languages are social issues in [X].”
— <https://www.winestockwebdesign.com/Essays/Lisp_Curse.html#ma...>
Why do people always assume that change is permanent?
It's never.
After decentralisation we always see centralisation. After a period of growth, a decline will follow. After the vibe coding hype, consolidation will follow. After rain comes sunshine.
> It's like someone should make a file... maybe in /etc ... and put short names for services in it... maybe it could be called /etc/services...
People shit-talk container orchestration systems like Kubernetes, but if anything they greatly simplified (if not completely eliminated) the need for this sort of network bookkeeping.
You forgot the /s at the end.
All our bookkeeping is now in YAML. Watch the spaces on your way out the door.
Learning nixos has been a lot of fun for me.
Your comment unironically is something I prefer and one of my biggest pain points with Linux.
As a newb, I'm sure there's something called with a mycommonproblemd name that has a stateful interface. But sometimes that all adds up to make things feel complex. And it lets me make stupid mistakes, like forgetting to close or open a port on firewalld, or disabling a container but forgetting to commit a change to my systemd units.
It's nice to just have a nice file called myservice.nix that tracks the firewall port, name, systemd startup and update scripts.
And don’t forget to quote your port assignments and version strings.
Hacker News loves to make snarky comments about everything to do with K8s and YAML, and yet in my experience the number of times an issue was caused by actual YAML can be counted on one hand.
Way more often it’s developers who can’t figure out that their http library only supports 2 concurrent connections, or emit garbage/malformed log lines and then bitch that they can’t see their app logs because we dropped them, or can’t be fucked to do “kubectl describe” in their own developer namespace that they have full permission for.
If you truly experience issues with just using YAML then you need to skill up probably.
Most of the issues with YAML are really issues with people who think that since "configuration as code" is good, that "code as configuration" must also be good.
No, go ahead. Tell me how just using /etc/services does what this does. Because I'm calling bullshit.
But go ahead. /etc/services, please, share with me how it's set up to do things like create the HTTPS cert and make it trusted and set up the domain. Go ahead.
Go ahead. You can ONLY use /etc/services.
Or admit you don't actually have a clue as to what /etc/services does.
The file /etc/services maps names to port numbers, like /etc/hosts does for hosts.
E.g. "telnet localhost ssh" takes you to port 22 (not the default 23 for telnet). This works because /etc/services maps "ssh" to "22".
If you're sick of remembering port numbers, create some entries in your /etc/services.
Of course, only programs which use getservbyname to resolve port numbers will accept your names.
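In Python, for instance, the same lookup is exposed through the `socket` module; both directions of the mapping are backed by /etc/services:

```python
import socket

# name -> number and number -> name, both read from /etc/services
ssh_port = socket.getservbyname("ssh", "tcp")
name_of_80 = socket.getservbyport(80, "tcp")
print(ssh_port, name_of_80)  # 22 http
```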
maintaining services files.
i don't know why people keep insisting on that file while there are perfectly fine commands to pull from your boxes what is holding what port.
that is all besides the point though if you look at what you should be doing and keeping all this information in some kind of asset management system from which you can deploy things (which is kinda what k8s and docker etc. try to do (miserably)).
unless you are binding stuff to random ports on random boxes there is no need to do any of it at runtime and you can just consult your bookkeeping (for which etc services lacks a lot of details to use...)
There is no need to come up with "local TLDs" like .vibe, .local, .test and so on -- there is already an industry convention! macOS and most Linux distros support subdomains of localhost, so <anything>.localhost works. You still need the reverse proxy to do the host->port mapping, but you save yourself local DNS fiddling.
Portless is a great tool for this! I use it for all my apps now. Zero config + built in HTTPS locally.
Example from the website:
- "dev": "next dev" # http://localhost:3000
+ "dev": "portless myapp next dev" # https://myapp.localhost
> There is no need to come up with "local TLDs" like .vibe, .local, .test and so on -- there is already an industry convention! macOS and most Linux distros support subdomains of localhost, so <anything>.localhost works.
That would work if your goal was to route traffic to localhost.
What if it isn't?
There are reasons why the likes of example.com exists.
From the article:
> So I built local.vibe — a friendly dashboard and local .vibe hostname for every local web app on your Mac. No more localhost:3000 vs localhost:5173 roulette.
> The whole thing communicates over a Unix socket acting as a reverse proxy. No external services, no accounts, no telemetry.
We’re discussing a tool that is designed for – and is only capable of – routing traffic to localhost. It’s perfectly reasonable to point out that there’s an easier solution for this use case.
It looks like this will win: https://en.wikipedia.org/wiki/.internal
example.com, and the reserved TLD ".example", exist for technical documentation and writing. If you are writing a comment on HN, or a curriculum for a networking class, then you can discuss "foo.example.com connects to bar.example.com" or "Let's hypothesize about two offices called accounts.example and human-resources.example"
The "example" domains are never supposed to reflect anything that is actually deployed onto LANs, or test labs, or the Internet, current situation notwithstanding.
https://en.wikipedia.org/wiki/.example
There are, likewise, IPv4 and IPv6 ranges that are reserved to be used in documentation. Not 192.168.0.0/16 or 10.0.0.0/8, but separate ranges that writers only write about, and that are never deployed, not even in private.
localhost is only ever going to be the loopback interface, never across a network: https://en.wikipedia.org/wiki/.localhost#Conventional_use
See also: https://en.wikipedia.org/wiki/.test
The latter article lists foreign-language TLDs which serve the same purpose.
Some proposals are described here: https://en.wikipedia.org/wiki/.home
And there's also .local for mDNS on local network!
I've also come across projects using a public DNS record that points to 127.0.0.1 (something like localtest.me?). IMO that's way worse than using .localhost since you're trusting some rando not to change the DNS records and exfiltrate your meant-to-be-local traffic.
I did not mention .local, because it is covered in the linked articles: a special-use TLD, reserved for a certain purpose. It has often happened that LAN admins try to name something under ".local" and configure a zone for it in their BIND server. But this is incorrect, because ".local" is already managed by the zeroconf/mDNS protocols. It is a special case; and that is what ".internal" seeks to rectify, by giving y'all a TLD that can be truly internal and truly a zone under DNS server control, whatever that looks like for you.
As for 127.0.0.0/8 in the public DNS: https://utcc.utoronto.ca/~cks/space/blog/sysadmin/HowNotToDo...
As for localnet and localhost in general:
https://utcc.utoronto.ca/~cks/space/blog/sysadmin/LocalhostI...
https://utcc.utoronto.ca/~cks/space/blog/web/LocalhostSurpri...
".vibe" is not a TLD. It is not a registered TLD; it is not a reserved name. It isn't a domain at all. Go ahead, do a WHOIS lookup. Anyone who attempts to use such gibberish, even in documentation, deserves to be rudely surprised, someday in the future.
I've used[0] `.local` to achieve something like this, by advertising service endpoints over mDNS.
Exposing random services to your local network is exactly what mDNS is for, I always thought it was a shame more dev tooling didn't do that.
[0]: https://github.com/andrewaylett/mod_bonjour is a fork of Apple's mod_bonjour, very much unloved of late I'm afraid.
I know it's mixing of layers, but I can't help but feel the IPv6 transition missed the boat when they didn't just get rid of ports in the process. They've changed so much else anyway.
Want to run another webserver instance or whatever on your computer? Get the OS to allocate a new IP for it. Ports be damned.
Could be implemented in a backwards compatible way by requiring all IPv6 TCP/UDP traffic to use a fixed port number.
an ipv6 packet does not have any port field. ports are at the level of tcp and udp, and you don't have to use tcp or udp on top of ipv6. an ipv4 packet does not have any port information either.
tcp6 is a thing though, was created at the same time as ipv6, and it does have ports, along with udp6. But if you really want one IP per stream and just hardwire port 1 or something, it's not like IPv6 does anything to stand in the way of that. Might have performance issues on some OSes binding thousands of IPs to one interface, but that's on them to fix. The bigger lift would be the APIs that would need to change to manage whole prefixes at a time instead of single IPs.
> ipv6 packet does not have any port field
Yes, that's why I said I know it was mixing of layers.
However ports are a layer violation in a strict sense, introduced as a workaround because there was no easy way to just add thousands of new IPs to a single host back in the IPv4 days. No need to continue a workaround that causes grief on a daily basis.
What do you mean? You’ll still have to use TCP or UDP over IPv6, and both of those protocols use ports. Nothing is stopping you from creating a transport protocol that doesn’t use ports if you want to, but that has nothing to do with the network layer.
I mean that to connect to a service you wouldn't need to know the port, the IPv6 address would be enough.
This is why I consider ports a layer violation of sorts. You never talk to a machine with TCP/UDP, you talk to a service on a machine. And so as it is the full address to the service isn't just the layer 3 address.
As I mentioned this would be especially interesting when hosting multiple services, same or different, on the same machine since there would be no port conflict.
Yeah, but I mean you have to have something at the transport layer; you can't just encapsulate the application layer into the network layer, skipping transport. That's not how the network stack works.
Perhaps it's because I'm tired but I can't make sense of your objection.
As I said you could implement it by having TCP/UDP as is, just with a fixed port number. This wouldn't be unlike the myriad of other conventions that litter IPv6, such as using /64 for a host or ULA's having a certain prefix.
Conceptually it's doable on linux and ipv6. Have the listening program sit on that default port of 80.
Something involving socat, an any-IP / TCP routing rule, a VPS or other machine with a ipv6 /64 and plenty of duct-tape.
You'd get an application sitting on port 80 accessible via some unique ipv6 address (in the /64) on a tcp port 80. They needn't be the same port number but it would make it easier.
This is really amusing, because absolutely everyone I know has vibe-coded this thing. I think the first one was https://github.com/ralt/dns-to-port; possibly mine is second (but it is very much a joke so I am not linking it). Vercel later did https://github.com/vercel-labs/portless and I would imagine quite a few others. Zeitgeist. It also probably means it's kinda useful.
And that none of us can be bothered to "Google" whether a thing that does the same thing already exists. Currently vibing, on a train, a spaced repetition thing for my kid - because I needed a specific list of countries - and it's faster to create the whole app than to figure out how to find one that would do this.
> a spaced repetition thing
anki?...
I do exactly this for the half dozen or so ports that I do care to remember.
Wow ... judging from all of these harsh comments, it seems the flood from Reddit to HN is going "well". I cannot believe all of the negative comments around the development of a piece of handy software.
I mean, it’s the shitty human nature. You can't create anything publicly without people bashing your project, and you, for the sheer insolence of creating a piece of software.
I like localias[1] for this problem. Not only do you get nice aliases for all your local ports, but you also get nice Caddy-managed TLS certs for them
I think this product demonstrates the atrophying of thought that results from too much LLM usage: design was obviously a long back-and-forth with a sycophantic LLM.
I find out what all my local servers are by `cat /etc/hosts`, because I put them in there. They run using an entry in the nginx config.
For short-lived stuff I don't even bother with that, I just use `whatever.localhost`.
If there was no LLM, the author would have put a little more thought into this, maybe done a Google search, and realised that all he needed was two shell scripts.
The more you use LLMs, the less you actually think
> The real annoyance is that it wasn’t just one machine. It was layers.
> I wanted a simple launcher for all the things that aren’t traditional desktop apps. Not Finder, Alfred or Raycast.
The entire damn article is like this - why would I trust software to run on my local machine when it was written by someone who did not even take care writing a blog post? How much care would they have possibly put into reviewing their vibe coded slop if they couldn't even bother to review their blog post?
> I put them in there
That seems to run orthogonal to this. The primary benefit I see here is not having to care at all what ports apps are actually starting on. Just run them, and access them by name. Same as a regular website on the internet where one doesn't care about the IP.
> How much care would they have possibly put into reviewing
Just enough to ensure that it works for them, which is what really matters. Others go in knowing that as well, and add/change that base to their own preference. That's the world we're now in.
> Just enough to ensure that it works for them, which is what really matters.
If that's what really mattered they wouldn't have posted an article they didn't write trying to get traction on a product they didn't create from a userbase that doesn't need it.
Nah people will be people, and a part of being a person is wanting approval from other people for something. And there will always be at least a few appreciative members of said userbase: I'm one of them.
Doesn't matter if they didn't actually write the code, but they put effort into refining an idea for a problem into a solution that fit their needs, and in sharing they've given those who never thought of it a base they can work from if they want or just go make something similar from scratch.
That was my exact post a couple days ago: https://news.ycombinator.com/item?id=47936315 (didn't get much traction)
I couldn't agree more. It seems so odd to have a HN submission for some vibe coded little "help-me tool" that is not clearly even needed. I wouldn't even say anything except the whole "vibe" thing is all over the article. It's just all a bit much. It's just sad.
There's a simple method you can use with nginx and /etc/hosts; I wrote about it a couple days ago [0]. I used it for an internal demo recently and realized that a new breed of devs have never seen a non-localhost URL run locally.
[0]: https://idiallo.com/blog/say-no-to-localhost3000-use-custom-...
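For reference, the method amounts to something like this; the name `myapp.test` and port 3000 are placeholders, not taken from the linked post:

```
# /etc/hosts -- point the made-up name at loopback
127.0.0.1   myapp.test

# nginx -- map the name to the port
server {
    listen 80;
    server_name myapp.test;
    location / { proxy_pass http://127.0.0.1:3000; }
}
```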
Custom domain is nice if you're planning on a real host later, but nowadays you can just use the .localhost domain and skip the whole /etc/hosts editing thing.
I essentially do this.
Super simple. (although I use rewrites at my dns layer for the whole local lan, but whatever)
It also solves issues my password manager has with multiple services on the one host with different ports, by putting each on its own second-level domain.
I've built this twice before. The main problem that I hit is that the AI agents suck at the process lifecycle management: leaving processes alive, starting the same daemon multiple times, etc.
From a brief glance over the code I like the approaches I see. Using the `/etc/resolver/` mechanism is a new trick to me!
The interesting part to me isn't the port numbers, it's the automatic service start/stop, including idle route shutdown.
Sounds similar to Vercel's portless CLI (https://portless.sh/)
Why not resolve everything with UNIX sockets instead? That way you can have them named and scoped, hiding behind port 443, since it's mostly HTTP anyway.
Does this work in the browser? How will paths to different resources used by the web app work?
works with curl; maybe there is a case to either build a proxy for UDS and expose them to a browser, or open a feature request with browser maintainers to support UDS
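A minimal sketch of the idea in Python's standard library: the service is addressed by a filesystem path instead of a port number. The socket path and the handler here are made up for the example:

```python
import http.client
import http.server
import os
import socket
import socketserver
import tempfile
import threading

# the path *is* the name; no port number anywhere
sock_path = os.path.join(tempfile.mkdtemp(), "myapp.sock")

class UnixHTTPServer(socketserver.UnixStreamServer):
    def get_request(self):
        conn, _ = self.socket.accept()
        # BaseHTTPRequestHandler expects a (host, port)-shaped address
        return conn, ("local", 0)

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello from a unix socket")

    def log_message(self, *args):  # keep the example quiet
        pass

server = UnixHTTPServer(sock_path, Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# http.client normally dials host:port; hand it a pre-connected
# unix-domain socket instead
conn = http.client.HTTPConnection("localhost")
conn.sock = socket.socket(socket.AF_UNIX)
conn.sock.connect(sock_path)
conn.request("GET", "/")
body = conn.getresponse().read()
print(body)
server.shutdown()
```

curl can talk to the same endpoint with `curl --unix-socket /path/to/myapp.sock http://localhost/`; it's the browsers that are missing.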
I wonder why not use nginx and some local DNS settings to just serve all these local services under a new, local URL.
Not too long ago I had a similar issue and solved with that.
I did the same using caddy for ease of getting https certificates
I mean, that's essentially what he's recreating here it looks like
A few other solutions / notes:
Your browser might be able to find what you're looking for if you just type in the page title
Your dev stuff could just open the browser automatically when you start it
*.localhost is a Secure Context in Chrome and Firefox
This is literally what mDNS is for. Didn't even know that it was a thing until I needed it for some custom firmware I was writing recently. It's like DNS but also has service port advertisements.
You might also use another "localhost"; the whole 127.0.0.0/8 is reserved for them. macOS is a pain there because it only binds 127.0.0.1 to the loopback interface, but you can add more manually.
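A sketch of that: on Linux any 127.x.y.z address works out of the box, while on macOS you would first need something like `sudo ifconfig lo0 alias 127.0.0.2` (the particular address here is just an example):

```python
import socket

# bind a listener to a second loopback address; each service can
# get its own 127.x.y.z instead of its own port
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.2", 0))
addr = s.getsockname()
print(addr)
s.close()
```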
This is a valid concern, certainly. I use kube for most things so it's not a problem, but my homeserver and its apps run on quadlets that I manage. In my case, I just added a README.md in the server account folder that each project's CLAUDE.md or whatever is configured to read. Then it selects a port and sticks that in the document and to be honest I have a few tens of services and it works. Haha, a direct replacement of machine for my own process.
Don't use localhost:3000, use your own custom domain[0]
0. https://idiallo.com/blog/say-no-to-localhost3000-use-custom-...
Vercel's portless is a great alternative, but unfortunately it doesn't work well with OAuth flows. I've built portmap[0] to solve that. It also comes with skills, which make it work really well with coding agents (instructions in the readme).
Alternative https://github.com/peterldowns/localias
Granted no fancy UI to start and stop things but is it really needed?
Tbh this is not a single binary; you need dnsmasq, Go, and other things
I think about a decade ago Pow did something similar, but using the .test domain, and it was perhaps Ruby-specific
I created something similar to help me spin up complex apps in multiple worktrees with full port orchestration: https://outport.dev/
Not the same, but someone recently posted this "port" tool here on HN: https://github.com/raskrebs/sonar
Cool project. Just yesterday I looked at https://portless.sh/ which is Vercel's take.
I have a strange short-term memory that immediately forgets where I was. I think it just decides I made my decision and no longer needed that trivia clouding the next thought. This is exactly what I have struggled with. Thanks a ton for this!!
I thought that was the reason ‘lsof -i -P’ existed
I use Cloudflare Tunnel so most of the products I build are exposed and listed there. I just add comments for those that aren't exposed (eg. browser extension dev port) to that file too. A single doc means coding agents know to look there and keep it updated too.
*.localhost works btw
I use subdomains on an OVH VPS, since I want to access the services outside the network, so I can use freshrss.mydomain.com. But anything that can rationalize port number sprawl is welcome.
Interesting.
I've been wanting something like this for local Dev, but I think more:
Per user DNS.
So if the process doing the lookup is my own then redirect to the named service.
Huh when I start a service in dev, I just click on the link in the terminal to visit the url. What is even the problem?
Aspire.dev should be mentioned here.
I hate these signs of LLM generated texts so much!!
> The real annoyance is that it wasn’t just one machine. It was layers.
I use the tailscale services feature for this, added benefit is I get https.
People will do anything to avoid learning the basics. Nothing new.
Do you have a link for me? I'd like to learn them, but find it really hard to find them.
For example, I had never heard about /etc/services until 2 minutes ago
Check out this book:
Unix and Linux System Administration Handbook by Evi Nemeth et al.
I'm pretty sure Arch wiki is worth reading, too.
I'm slightly annoyed that vite's default port isn't 8483
why?
VITE typed on a T9 keyboard is 8483.
5173 spells Vite...
173 looks like ITE
5 in roman numerals is V.
T9 is predictive and based on a dictionary and training.
If you type "8483" on T9, your phone may offer "THUD" or "TITE" or "VITE", or all three, as choices.
But with a normal telephone keypad, if you dial, e.g. "(800) 555-VITE" then you will always dial "8483".
https://en.wikipedia.org/wiki/Phoneword
Also, a service port is always qualified by its protocol. There are separate port namespaces for each IP protocol that uses ports. "8483" is not a service port until you spell it out:

    8483/tcp
    8483/udp
    8483/sctp
    8483/dccp
    etc.

A TCP stream, for example, consists of a tuple:
src:port1 dst:port2
What is the benefit of using HTTPS for this particular use case?
Some browser functions only work over https, localhost is the exception. So if you change localhost:5173 to myapp.vibe it needs a valid certificate.
And localhost being the exception is often quite painful - I've stuck into several projects that worked just fine on localhost, and then were a pain in the neck to convert to run in secure contexts
This project is essentially "give me some metadata & a command which takes env $PORT, and I'll handle the rest". Which is neat!
I am also sick of handling port numbers - I end up allocating them on a schema to different services, so for testing I can spool any VM/service combination and avoid crossover. But if I want the same service twice, ah...
It always fascinated me that ports don't have any kind of textual resolver, so you can bind to `:1234` and also say "please also accept `:foobar`". But that would itself require some kind of "port resolver" on a device, and that's another service to break and fix :)
There is /etc/services to map port numbers to service names, and getservbyname() to resolve them.
DNS for /etc/hosts and now vibe.local for /etc/services. What will they think of next!
SVCB DNS records
getservbyname(3)
Unrelated, but that site is unreadable to me... dark grey on black?
i have something like this too, currently a 60 line nodejs file
What I do is use a hash function to derive port number from service name.
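A minimal sketch of that approach; the hash choice and the unprivileged range are arbitrary, and you'd still need to handle the rare collision:

```python
import hashlib

# deterministically derive a port in the unprivileged range (1024-65535)
# from a service name, so every machine agrees without a registry
def port_for(name: str) -> int:
    digest = hashlib.sha256(name.encode()).digest()
    return 1024 + int.from_bytes(digest[:4], "big") % (65536 - 1024)

print(port_for("blog"), port_for("tensorboard"))
```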
Bind to Port 0
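Binding to port 0 asks the OS to pick any free port, which you then read back; the flip side is that you still need some way to tell clients what was picked:

```python
import socket

# let the kernel allocate a free ephemeral port, then read it back
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
port = s.getsockname()[1]
print(port)
s.close()
```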
Nice. An instant disappointment that there's no Linux support, but adding it should be a quick prompt away.
It is funny, I just built something like this last week and named it "Network". Additionally it scans for any type of data packet arriving at the SonicWall and checks whether it's approved by me or not. I am paranoid after using TP-Link at home like a dumbass.
I am very out of the loop. What is wrong with TP Link? What are the risks with it?
Chinese spyware.
https://www.reddit.com/r/msp/comments/1pxe1zc/tplink_ban_pro...
https://www.pcmag.com/news/facing-router-ban-tp-link-tells-f...
https://www.nytimes.com/wirecutter/reviews/foreign-made-wi-f...
Just search for "TP Link ban", you will see a lot of news. I switched to SonicWall + Ubiquiti + my own monitoring software to be safe. I should've done it years ago, but I was lazy.
This is a neat approach. One thing I wonder about is how it handles service names that span different protocols on the same port (like "domain" at 53 for both TCP and UDP). The /etc/services file has the same ambiguity, but at least it lists the protocol alongside the port. A lookup table that includes the protocol would be more robust for mixed environments.
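For what it's worth, the classic resolver already forces you to disambiguate by protocol; e.g. in Python:

```python
import socket

# "domain" (DNS) is a standard /etc/services entry registered for both
# protocols, so the lookup takes the protocol explicitly
tcp_port = socket.getservbyname("domain", "tcp")
udp_port = socket.getservbyname("domain", "udp")
print(tcp_port, udp_port)
```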