I made a simple geolocation service (maxkostinevich.com)

I applaud your efforts, but it's disingenuous to title this as how you "made a geolocation service" when you are just reading a field in someone else's geolocation service.
Indeed. Using another service and being a wrapper around it is not all that impressive. He is also using workers which makes his solution expensive.
I run ifconfig.io, which now gets just over a billion hits a day. It is basically an echo service that just parrots back what Cloudflare tells it. But since I run it on Linode, it costs me a whopping $40 a month for 35 billion returns a month. Using Workers would cost me several thousand dollars.
Your service has a response time of 20-30ms (from one of my servers, at least). So on top of everything else it's faster than OP too. Great example!
Thanks for sharing. This is a perfect example that proves Lambda/serverless/CF Workers are snake oil.
It's not.
If you're under 10 million requests per month, serverless will be cheaper and safer.
Else, if you're under 1 billion requests per month, a managed VPS will be cheaper.
If you're over hundreds of billions of requests per month, anything other than on-premise is probably a mistake.
Then you have grey areas where you need to do the math.
But for most small projects, you cannot beat Lambda prices. Especially if you expect bursts.
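The tiers above come down to simple arithmetic. Here's a back-of-envelope sketch; the two prices are illustrative assumptions, not anyone's actual rate card:

```python
# Back-of-envelope crossover between per-request serverless billing and a
# flat-rate VPS. Both prices below are illustrative assumptions.
SERVERLESS_PER_MILLION = 0.50  # $/1M requests, Workers-style paid pricing
VPS_MONTHLY = 40.00            # flat monthly cost of a small VPS

def monthly_cost_serverless(requests):
    """Pure per-request billing; free tiers not modeled."""
    return requests / 1_000_000 * SERVERLESS_PER_MILLION

def crossover_requests():
    """Monthly volume at which the flat-rate VPS becomes cheaper."""
    return int(VPS_MONTHLY / SERVERLESS_PER_MILLION * 1_000_000)

print(crossover_requests())                 # 80000000 requests/month
print(monthly_cost_serverless(10_000_000))  # 5.0 dollars at 10M requests
```

At these assumed prices the crossover sits around 80M requests/month, consistent with the rough tiers above: well below it, serverless wins; well above it, a flat-rate box wins.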
The issue with your calculation is that you are trying to save $40 a month by spending 100-500 person-hours and screwing up your code base.
But if I expect bursts, Lambda could potentially cost me a few thousand dollars in a day while my own instance could handle it fine for like $50. Even paying $50/mo over a few years would pay off in that case.
Exactly. Developer time is a lot more expensive. Most people want to look cool and do cool things, so they drink the Kool-Aid.
It sounds a lot cooler to say "I made a geolocation service using serverless/Lambda" than "I made a geolocation service using boring monolithic technology", where the former takes 150 hours to get right and is painful to extend and manage, while the latter is very easy to maintain and very predictably priced.
There's TONS of use cases for Lambdas where the operational costs (both in terms of time/energy/expertise and to lesser extent monetary) of running something persistent are much higher.
In my last gig, we ran plenty of services that would never need to scale and would likely never hit the point where Lambda became too expensive. Low-utilization services (Example for order submit, because we only ever got a couple hundred orders per day), Cron-type jobs (Admins could use Cloudwatch to monitor), Temporary fan-outs using queues. Anything that won't ever hit the millions-per-day level is a good candidate for something along these lines. Most of those services cost <$5/month.
I think there's situations where Lambdas make sense in spite of their pricing. I applied for a job on the Warren campaign and their tech stack relied on AWS lambdas. I think that makes sense for them. It's a short-lived endeavor that isn't going to be around next year. Spending a bit more to not have to deal with the headache of maintaining your own infra makes sense in scenarios like that.
If that’s your opinion I’d like to hear your rebuttal to this [1]. It’s hard to argue against actual numbers for an actual production app running at actual scale.
[1] https://www.troyhunt.com/serverless-to-the-max-doing-big-thi...
If you are spending $279 on a complicated Cloudflare Workers/Azure Functions setup, you could have gotten away with $100 a month if you had chosen Linode/droplets.
32 million requests can easily be served by a $50/month Heroku node.
I think you massively mis-read the article. $279/mo is what he would have spent. What he actually spends is under $1/mo. Sure a $50/mo Heroku box would do the trick but why spend $49/mo MORE when serverless is cheaper?
What I want to hear is how you argue serverless is “snake oil” when evidence says it’s massively cheaper even at scale. Less than $1/mo to serve 141m requests. How is that snake oil when even you say an equivalent “serverfull” solution costs 50x more?
Well, not exactly
1. You are saving $49/month by spending 250+ engineering hours to structure an application that is near impossible to extend/maintain.
2. The $49/month subsidy (free tier) you are getting is temporary and will eventually go away (think of the Google Maps API price change).
3. 50x cost for the serverfull solution is not the right number. Say you have to handle 500M requests per month (around 200 QPS); the 50x becomes 0.00X.
4. If there is any sudden spike in popularity, you are screwed; you will end up paying several thousand dollars per month.
But see, none of that is true. Everything you just said is misleading at best and completely false at worst. Especially the part about the free tier. This isn't the 12-month free trial, this is the "free forever" tier. Sure, prices can change, but they can change for servers too. That's no different, and you can't waste time/effort trying to engineer a solution for a problem that doesn't yet exist and may never exist.
I think I’ve heard enough to say you have no idea what you’re talking about and this conversation has become pointless.
There's a cost curve Lambda excels at. For services handling under 100 requests a second, it's cheaper than a cheap EC2 instance.
Choosing EC2 was probably a mistake; maybe Linode would have been the better choice here.
Have you considered not including the entire bootstrap.css on the front page since you only use about 1% of it :)
I almost never do frontend work. I'll gladly take a PR if you want to make that one page faster and prettier. Though I don't want to make the build require yarn or other js/css compiler framework for a single page.
if you give me a github repo, ill see what I can do
I would assume the majority (99.9%) of the requests are actually API calls that never deliver any CSS.
Indeed. Only ~30k page views trigger the Google Analytics script, so 99.9991% of requests involve no HTML at all.
Exactly! I was about to tell a similar story, but yours is MUCH better. I run an internal service that just loads (a filtered version of) the MaxMind GeoLite2 DB in memory using its great C library bindings for Perl, and on top of that I fork (so memory is shared) a Mojolicious server; it's able to serve millions of requests per hour on a very small VM. Response time is not much more than a "hello world"...
Hey I use ifconfig.io all the time! Thanks for the handy service. Really cool that you are able to handle that much traffic for so little $.
Sorry silly question, still learning. Can you talk about how your linode setup is? Do you report back the same fields?
It's just a normal linode running archlinux. There are very minimal tweaks to handle the new connection load.
I should write a blog post about it. It has steadily gained more traffic over the years and hasn't needed much care and feeding.
The Go code can be found here if you're interested, though it's some of the ugliest code I've written. I made it when ifconfig.me was having load issues many years ago.
I love the simplicity. I'm sure I could find similar source code for similar services that is way over engineered and doesn't perform nearly as well. Do you serve this with a reverse proxy in front, or just as is? I'm assuming as is since you have the TLS configured right in the code, but I figure I'd ask.
Having a reverse proxy on the same machine effectively doubles the number of connections and requires copying the request around. There is no practical benefit to a reverse proxy for this use case, so the Go program is listening directly on the internet.
I do have the service behind CloudFlare, which is essentially a reverse proxy. The reason for CloudFlare is non-obvious (it deserves a blog post): connection pooling.
If all the requests went straight back to the origin, the bottleneck would not be the Go code, but Linux opening and closing all those single-use TCP sessions. CloudFlare creates around 100k persistent connections to the backend and just keeps them open. This makes Linux much happier.
The high connection count actually made the service unstable once: https://github.com/georgyo/ifconfig.io/issues/2
Great service, thanks for running it!
He could have used the CF cache API in the worker, which would drop the cost quite a lot.
How would it reduce the cost? They are only returning data from the headers of the request they received, plus doing an (unnecessary) lookup to turn the country code into a country name.
What would you cache to make it cheaper?
No matter what his worker needs the smallest time/memory slot there is. You cannot make it cheaper while still using workers.
Plus it's a clickbaity title in disguise. Writing "2 req/sec" would show that it's slower than my fridge.
My fridge can only handle a single request at any given time.
Almost all geolocation services read a field from MaxMind's db file.
It's a nice solution. Why log into your alt account to crap on it?
I agree with you. Thanks for being supportive - it speaks very well of you. That said though, hackers are hackers because we think like this.
I'm trying to learn to take it as the highest form of compliment - when hackers care enough to hack on one of my ideas, that means it has to be cool.
Google App Engine had this quite a while ago (almost a decade, I think) with a very generous free tier. It also has approximate lat/long, city, state and country.
https://blip.runway7.net https://github.com/runway7/blip
I lost track of the consumer apps I've used this on and still haven't received a $1+ bill.
Is there a good lat/lon to timezone database that lets you leverage this into a presumptive timezone offset from UTC and thus local time?
I built something similar for https://ipdata.co. The API will return a time object like this for any IP:
"time_zone": { "name": "America/Chicago", "abbr": "CDT", "offset": "-0500", "is_dst": true, "current_time": "2020-07-25T06:10:16.945136-05:00" }Do you need the lat/lon for this? Usually you'd just put in the nearest city.
Moment.js is probably the most prevalent time and date library.
Try https://blip.runway7.net/v2.beta – I've added unix timezone based on the country and state code. The ISO databases have timezones associated with the level 2 code, which roughly corresponds to a state inside a country.
Can’t you just ask the client which timezone it’s in? Every browser knows, and it will be a more accurate answer: the user might be on a foreign VPN, or temporarily in another TZ but not working in that timezone.
My application is not browser-based.
I’d like to set a system clock timezone via geoip only, without location lookup via wifi. On my more secure systems I have location services disabled and it’d be nice to have an accurate local time automatically.
I have a need for this too, and considering making a basic service for it.
https://blip.runway7.net/v2.beta should work for you, let me know if it doesn't?
I actually can’t remember :-/ will fish it out and set up a proper service, maybe paid.
Where is the source code for v2.beta? It’s not on your GitHub.
Hi!
I need to be able to send a lat/long/timestamp and get a timezone back.
This is a surprisingly difficult task if you need it to work truly globally and accurately. You need specific, detailed and up-to-date map data for it, and timezones change somewhere in the world all the time, especially daylight saving rules. In 99% of cases you need to find some workaround, or limit your task to specific regions and get something that works in 95% of cases.
Ah, I haven’t seen a dataset I could use for that. Everything on the links I’m putting up is based on the country and state information.
I’ve seen timezone maps drawn, so this information does exist somewhere, but one would have to draw polygons for each timezone and do coverage checks, which seems complicated. The time zone lines zig and zag a lot.
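The coverage check itself is standard ray casting; the hard part is obtaining the boundary polygons. A toy sketch (the rectangle below is a hypothetical stand-in for a real timezone polygon, which in practice would come from a timezone boundary dataset):

```python
# Ray-casting point-in-polygon test, i.e. the "coverage check" described
# above. Real lookups would use actual timezone boundary polygons; the
# rectangle below is a hypothetical stand-in.
def point_in_polygon(lat, lon, polygon):
    """polygon: list of (lat, lon) vertices. True if the point is inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        y1, x1 = polygon[i]
        y2, x2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray heading east from the point.
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# Hypothetical rectangle roughly standing in for one timezone's polygon.
ZONE = [(30.0, -100.0), (30.0, -90.0), (40.0, -90.0), (40.0, -100.0)]

print(point_in_polygon(35.0, -95.0, ZONE))  # True
print(point_in_polygon(35.0, -85.0, ZONE))  # False
```

An odd number of edge crossings means the point is inside; real timezone polygons just have thousands of vertices instead of four, plus all the zigging and zagging mentioned above.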
Since most CDNs are already doing GeoIP lookups (for request headers or log entries), you can leverage that to provide the data back in the response body via origin, worker or even CDN edge config.
Programmatically populating the response body, as in the Cloudflare worker example from the post, is better than going to the origin just to echo some headers back in the response. To me, something like Fastly's VCL config language is even simpler. It directly executes on every CDN edge node worldwide upon request.
For example, I just whipped this up on Fastly using VCL. It returns GeoIP as json data for your IP at the root path:
Or if you want a particular IP, just append it to the path:
http://geo.zombe.es/2a04:4e42:600::313
You could do the same via query params, headers, etc. Have URL endpoints that only return some of the data, and so forth.
The VCL syntax gets a little gross when you handle quoting strings and assembling json and testing if the string is empty, but it gets the job done.
Of course what you might want from GeoIP data may not be what you get. It's really kind of a useful kludge that gets treated sometimes as a panacea.
This dataset right now thinks that I'm about 5 miles east of my location, but when subnets are repurposed it could be much more significant. And the data sources are always changing, so who knows what it will think tomorrow.
Your service correctly returned the following data about my IP:
Is this what Fastly is thinking about my IP?

    "client": {
      "conn_speed": "broadband",
      "conn_type": "wifi",
      "proxy_description": "vpn",
      "proxy_type": "hosting"
    }

Yeah, that's coming from these four Fastly VCL variables:
conn_speed: https://developer.fastly.com/reference/vcl/variables/geoloca...
conn_type: https://developer.fastly.com/reference/vcl/variables/geoloca...
proxy_desc: https://developer.fastly.com/reference/vcl/variables/geoloca...
proxy_type: https://developer.fastly.com/reference/vcl/variables/geoloca...
conn_type is interesting to me, I'm not sure how you would distinguish wifi vs. wired based on HTTP header data.
I haven’t worked in the space in a while but I’d doubt it’s via anything like HTTP headers. From a total guess I’d look at packet inter frame gaps & jitter to imply client csmacd or l2 behavior. Maaaaybe MTU and TTLs to infer intermediate routed networks or devices like the tunnel. And of course various TCP options and behavior, like say timestamps and dsack, to fingerprint the client or intervening ip proxies.
There is a good chance it's just https://dev.maxmind.com/geoip/geoip2/geoip2-connection-type-...
It used to be, for years; that's the older stuff in the "geoip.<key>" namespace.
The stuff in the "client.geo.<key>" space is from a newer/better/higher-tier service (so they say; I forget the name). Also, I think some of it is mixed in with other sources, and some of the info is self-sourced.
Whatever it's using for conn_type, it's not accurate. I get "wifi" on all of my computers, wired or wireless.
What does the code for this look like?
The VCL for it is available at https://gist.github.com/simonkuhn/a380a6fa205db87a3625f26ad0...
vcl_recv and vcl_error are the important bits, the rest is VCL boilerplate from https://developer.fastly.com/learning/vcl/using/ for unused subroutines.
This one I made couple years ago (and haven’t checked in years) is still running for free thanks to Heroku and Cloudflare: https://github.com/jlxw/geoip
I (like many others it seems!) have also built a geolocation service however mine is built on top of MaxMind's DBs that are mentioned in the post. Its on a few boxes running OpenResty and now handles 130m+ requests/day. Was really fun to build!
It's forbidden by MaxMind's ToS; these dudes changed the rules and force you to abide by the new conditions when you download an update.
It's very borderline of them (at least ethically).
At the bottom of the page it credits the MaxMind GeoLite database. The GeoLite2 database is licensed as CC BY-SA 4.0. It's the GeoIP2 database that has stricter licensing.
As another reply states, I publish that the source of the data is indeed MaxMind. I agree it doesn't quite sit right ethically, so that's why I don't charge for the service.
I didn't see anything in their ToS that would prohibit hosting an API as long as you enforce the same requirements as maxmind does?
I had to tackle this problem as well for a small project and found that I could just send a GET request to https://www.cloudflare.com/cdn-cgi/trace and parse the response with a simple regex: /loc=(\w*)$/gm
The response time averages 10ms and it's free, though I'm not sure if there is a ToS or anything attached to this endpoint; I've only found inconclusive discussion here: https://community.cloudflare.com/t/what-are-the-terms-of-use...
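For illustration, parsing that trace format with the same regex looks like this (the sample body is fabricated but follows the key=value lines the endpoint returns):

```python
# Parse a /cdn-cgi/trace style key=value body with the loc= regex
# mentioned above. SAMPLE is a fabricated example of the format.
import re

SAMPLE = """fl=123abc
h=www.cloudflare.com
ip=203.0.113.7
visit_scheme=https
loc=US
"""

def country_from_trace(body):
    m = re.search(r"loc=(\w*)$", body, re.MULTILINE)
    return m.group(1) if m else None

print(country_from_trace(SAMPLE))  # US
```

Of course, as the reply below points out, the format is a debugging aid rather than an API, so any parser like this can break without notice.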
I mean, you can do that but it's a hideous hack and we might change that output at any time and you could easily hit DDoS mitigation or other security features if you do that. That endpoint is used for debugging and is in no way an API.
I'd advise you to just create a Cloudflare Worker if you want do this. You'll get more reliability etc. and Workers have a very generous free tier.
It is worth mentioning that /cdn-cgi/trace doesn't only work under www.cloudflare.com. It works for any https?://DOMAIN_WITH_CLOUDFLARE/cdn-cgi/trace . It is also an easy way to determine whether a website is using CloudFlare.
Well that's a bit more than 2 requests per second on average. Sure you may have some bursts once in a while, but nothing a 3€ VM can't manage IMHO.
I always find it funny when people brag about cost savings on cloud services while overlooking the insane premium they're paying by using the "cloud" to begin with.
It's kinda like a construction firm using a supercar to haul materials and then bragging about their efforts to make cosmetic repairs cost-effective... maybe you could just not use an overpriced car to begin with and then damage won't be an issue?
Am I missing something? Isn't a $3 VM referring to a VM service in a cloud somewhere? If you try to do the same thing without cloud you'll have to end up renting space in a datacenter, install your own servers, connect to an ISP, power bills, etc etc. Cloud definitely seems cheaper in comparison
GP and you aren't using the same definition of cloud. By cloud, they mean infrastructures like AWS/GCP/Azure, not simple VM hosting (called a VPS outside "clouds", and compute nodes in "clouds").
What's the definition of "cloud" then which excludes VPS from the umbrella of services offered in cloud? Not really sure how a VPS is any different from a Google Compute Engine
I think the main difference is the pricing structure. Usually "cloud" refers to the auto-scaling servers with the pay-as-you go pricing, while VPSs are usually manually instantiated and have a fixed monthly cost. Also, sometimes with VPSs you get more control over the server's settings than with "cloud" services.
So "cloud" is like renting a dynamically changing number of VPSs each month and which are usually more costly if your consumption is not spiky (eg. when a cheap VPS rented 24/7 is enough).
The main advantage of the cloud for me that it provides PaaS services. You don't have to administer a VM, etc, you just use the services and it's someone else's job to install security updates and stuff.
Installing updates and stuff is easy to automate.
And sometimes automatic updates break, and then the site may be down until you find the problem, which may require a lot of reading (and therefore a long time) if you don't deal with sysadmin stuff often.
With PaaS it's someone else's job who does this every day.
...so someone else can install the updates that break the server? Check your SLA, no cloud provider is going to pay you much compensation when the service goes down
If you stick to LTS releases and stay away from Oracle/Java then you are highly likely to never have OS updates break the application.
The only thing I like about Lambda is that your code can easily be deployed to different geo locations.
You can also easily spin up a VM in a different geographic location.
If you spun up a VM in 200 locations like Cloudflare, that $3 VPS suddenly becomes $600
That's true, but likely not as close to the customer as CF Workers on edge. And certainly not as easy, as CF does all that automatically.
A Pi should be able to handle this too.
A Pi is basically yesteryear’s beefy rack server.
A lambda is potentially cheaper if there are long periods with no execution though?
Full marks to Cloudflare for engineering a radically effective and simple alternative to AWS Lambda (+ API Gateway): it is simply a fantastic serverless offering for low-latency, network-bound workloads. For paying customers, they even throw in freemium access to their globally distributed KV store and a forever-free zonal cache to sweeten the already-good-enough deal. That said, good luck with their support team in case you discover undocumented limits (in production) like these [0].
I got sent a (frustratingly incorrect) bot reply to a ticket and a reminder that Enterprise / Business / Pro customers are priority (in that order), even though I pay for Workers. It has been an uphill battle to get someone to take a look at the ticket so far. Thankfully, we haven't gone to production yet, but as a consequence we now need to plan mitigation for scenarios where Workers blacks out our traffic (but Support can't be of immediate help because "free customer").
[0] https://community.cloudflare.com/t/workers-and-sub-requests/...
> now need to plan to add mitigation in scenarios where Workers blacks-out our traffic (but Support can't be of immediate help because "free customer").
How is it that you prefer to engineer those workarounds instead of just paying $20/month? It surely can't be worth in engineering hours and ongoing maintenance of hacky workaround code?
> How is it that you prefer to engineer those workarounds instead of just paying $20/month?
If it isn't clear, there's an issue of trust here, not money. And frankly, a fallback shouldn't be much of an issue since fly.io, AWS Lambda@Edge are like-for-like swaps, so we'd still be "serverless" just on a different yet similar platform (albeit costlier).
The cheapest alternative has to be to ask the users where they are and store it on the user device? I never understood the need for geoip unless you ship spyware.
If the exact location is important geoip is not accurate enough anyway. Forwarding to regional sites automatically is just annoying when it doesn't work properly or someone is traveling abroad.
I can provide you a valid user-centric use case: we have shops in different cities, and we can preselect the closest one on the order form according to the city, if the customer chooses the pick-up delivery method (faster than courier). We don't need to be more precise than this, and asking the user defeats the purpose; they would have to take another action just to allow this. We don't do anything else with this information.
If it's just to route to regional sites, that's fine. Letting the user select just makes sense.
However, there's a lot of use cases that semi-accurate geo-locations make sense. The first of which is analytics. If I'm a marketing person at a SaaS company, I want to know where my customers physically are if possible for a variety of useful reasons.
Good quality estimates for location also help with security and compliance use cases as well. If a user logs in from a new country on the opposite of the world, you can flag that and take whatever action you want whether it's to block them, fire off an email to the person who owns the account or whatever.
> I want to know where my customers physically are if possible for a variety of useful reasons.
I hope you're factoring in error rates. Because right now Google, with its billions of dollars to spend on geolocation, tells me that my laptop is in Albuquerque, and my phone is in Los Angeles. Neither are within 500 miles of either.
> If I'm a marketing person at a SaaS company, I want to know where my customers physically are if possible for a variety of useful reasons.
What reasons? What makes you feel entitled to that information if not volunteered?
Any info that can be gleaned from the request is fair game.
We have had clients that need it for content licensing requirements, it doesn't make sense to allow users to choose their region in a scenario where content is region dependent.
Of course, you can just get around this with a VPN.
> I never understood the need for geoip unless you ship spyware.
There are many many cases where it’s relevant. Others have mentioned prepopulating or ease of use aspects. But you’re making big assumptions that the client has an interactive user at all. It’s important where you don’t own the client or there is defined behavior, say at a protocol behavior, that must be respected. Moving earlier than that there are cases where you need to route traffic before any sort of connection is even established.
I suspect you’re also underestimating geolocation precision and accuracy. Free datasets will get you the right state or city say 90-95% of the time. Cheap datasets get you to the right town or post code 95-97% of the time. Expensive or bespoke multi source datasets will get you reasonable post code, neighborhood, or even household & address accuracy. Think of cross correlating order histories per physical address, device fingerprints, and IP addresses/triangulation.
Anything to do with mapping or local searches, geoip is very handy, save your users a few clicks and seconds when they first visit your site. One less dialog window to deal with.
since this article mentions maxmind, this reminded me of when geolocation by IP address goes horribly wrong, and non-technical persons interpret the results as something to be relied upon as factual:
https://arstechnica.com/tech-policy/2016/08/kansas-couple-su...
https://www.theguardian.com/technology/2016/aug/09/maxmind-m...
https://mashable.com/2016/08/11/ip-addresses-kansas/
ISP perspective here: Geolocation by granular /24 to /20 sized block of ipv4 space is often wildly inaccurate on a regional basis. It's entirely possible for the ARIN registration (used by maxmind) to be a street address in Seattle, but serving end user ISP customers a 4.5 hour drive away in a far eastern corner of WA state.
Also, please see https://github.com/analogic/ipgeo, a daily-updated IP/country database with an open license (shameless ad).
That would be a lot more appealing if it explained where the data is sourced from, included the update scripts, and how it is licensed - the only issue asking that question was closed without a word.
It likely uses the data published by RIRs. I wrote something similar that documents where it sources the data and how to generate it: https://github.com/geoacumen/geoacumen-country
I've been meaning to make automatic Github releases for it.
The issue was closed with a commit adding the MIT license to the repo.
I had a geolocation service running on Google App Engine, for free. After tuning up caching, the daily traffic of 400,000 requests/day fit within the free tier.
https://www.united-coders.com/christian-harms/detailed-perfo...
I've used the free MaxMind database for a while, but since last year I've been using iplist.cc. It supports IPv4 and IPv6, and shows whether an IP is Tor or spam, which ASN it belongs to, and a lot more.
It's also free and fast but with services like these I always wonder how long they manage to stay free.
Right. The first worry is "will this be here tomorrow?" The second is "how much more will they end up charging?"
I had good success with MaxMind's free database. I can't find ground-truth results for its accuracy, but my sampling was pretty good. They also have an incentive to keep it accurate.
Love the ingenuity and thank you for the performance comparison on Cloudflare Workers vs AWS Lambda. I personally wouldn’t consider what you built a Geolocation service, but glad it solved your use case!
Thanks!
Side note:
6 million requests per month is only about 2.3 requests per second; a Raspberry Pi could do this (technically).
That's only true if you assume an even distribution. If the requests are clustered, the scenario is a bit different (your conclusion might still hold anyway).
Even if all requests were clustered into one hour per day, it’s only 54 requests per second. Pretty sure a Pi would be up to the job.
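A quick sanity check on the averages in this subthread; both quoted figures are in the right ballpark, with the exact values depending on rounding:

```python
# Request-rate arithmetic for 6 million requests per month.
MONTHLY = 6_000_000

avg_per_second = MONTHLY / (30 * 24 * 3600)  # spread evenly over a month
burst_per_second = MONTHLY / 30 / 3600       # all traffic in one hour a day

print(round(avg_per_second, 1))    # 2.3
print(round(burst_per_second, 1))  # 55.6
```

Even the pathological one-hour-a-day clustering only gets to the mid-50s of requests per second, comfortably inside what a Pi can serve.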
Yes, HN hit my blog on a 3b+ and that was no issue.
Which blog out of curiosity and 3b requests in which period of time? Is your blog static? Thanks!
3b+ is a Raspberry Pi model.
Technically? It can easily handle thousands of requests a second.
Would they all be spaced out evenly though?
If the processing time is well below a second per request, requests per second is a more interesting figure than stretching the count out over a month.
I built a simple API using pure NGINX to get the IP address of a client, for times when I needed to ask a customer for his IP address; it was easier sending them a URL for a service I control, than explaining how they could get that information another way.
Now I have been thinking about opening it up for more people because while there are a variety of these services out there, one more does not hurt — it already exists, and works, so who knows, maybe more people would like to use it. The code is open, and access logs go to /dev/null; I could probably add a read-only SSH user for people to confirm that for themselves.
Geolocation could easily be added to it, but then comes my question: is there any use-case for geolocation APIs that does not involve tracking users for shitty purposes?
I was excited about adding this feature because I am using pure NGINX for it, and it was a fun learning experience, but I asked myself that question when I started writing the documentation for the website, and I still do not have an answer. Marketing material for other APIs that offer geolocation usually have user tracking as a selling point.
Personally, I have no use for geolocation, and if all use-cases involve tracking users without consent and breaking their privacy, I want no part in that.
I have exactly one non-shady use case:
I work with an event org with a fairly common name, and another city in our region has an event with the same name. For some years, we’ve outranked them for major keywords and getting their confused customers was becoming a headache for us.
So now if our website thinks you’re in the other city, a little banner appears above the content asking if you’re looking for the other site, and offers the link.
This has saved us ~10 phone calls per day in season. Before adding this link we all got to the point of just doing their customer service for them and helping people buy tickets because it was less drama than trying to get them to call the right people.
While I agree that your use-case is not shady, I am not sure how I would work that in the marketing material for the website — maybe I just suck at writing marketing copy.
But in any case, your response does tell me that there are other things that can be done with that information, so maybe I should focus on the IP thing, and have the geolocation features kinda hidden away in the documentation pages.
>is there any use-case for geolocation APIs that does not involve tracking users for shitty purposes?
Use my GeoIP-derived longitude and adapt your CSS to show me your content in night mode when it is night time at my longitude. That would be cool.
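A rough sketch of the idea: with nothing but longitude you can approximate local solar time (15 degrees per hour) and pick a theme from it. This deliberately ignores DST, seasons and political timezone borders; it's purely illustrative:

```python
# Approximate "is it night there?" from longitude alone: local solar time
# is UTC shifted by one hour per 15 degrees of longitude. Ignores DST,
# seasons and political timezone boundaries; purely illustrative.
from datetime import datetime, timedelta, timezone

def is_night(longitude_deg, utc_now):
    solar = utc_now + timedelta(hours=longitude_deg / 15.0)
    return not (6 <= solar.hour < 18)

noon_utc = datetime(2020, 7, 25, 12, 0, tzinfo=timezone.utc)
print(is_night(0.0, noon_utc))    # False: solar noon at Greenwich
print(is_night(180.0, noon_utc))  # True: solar midnight at the antimeridian
```

The server would compute this from the GeoIP longitude and serve a `night` CSS class (or not) accordingly.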
Seems like the kind of thing I should not decide for a user.
When I build websites, I usually prefer using `prefers-color-scheme: dark` in CSS, so the user can decide.
Geolocation is a solid first order solution to comply with local data privacy laws.
So this finds the country based on some cloudflare-populated info of an incoming request (which sounds like it solved OP’s problem which is good), but if you want to use MaxMind’s database to find the country of any public IP I built the following a few months back:
TECHNIQUE: use the proprietary HTTP Request headers available through CDN/Cloud providers like Cloudflare Workers' cf-country [1], Amazon CloudFront's CloudFront-Viewer-Country [2], and Google App Engine's X-Appengine-Country/-Region/-City/... [3] to get client Geolocation data.
[1] https://developers.cloudflare.com/workers/reference/apis/req...
[2] https://docs.aws.amazon.com/AmazonCloudFront/latest/Develope...
[3] https://cloud.google.com/appengine/docs/standard/go/referenc...
Thank you - I was about to ask if this sort of information was available through other cloud providers.
One of the main challenges of building on Serverless platforms is rate-limiting.
There's nothing stopping a script-kiddie from thundering away at the Serverless endpoint, resulting in unexpectedly high bills [0].
As for Workers specifically, Cloudflare's rate-limiting plan makes the whole thing 10x more expensive at $10 for 2,000,000 good requests [1] + $1 for 2,000,000 Workers requests. I don't think other cloud providers fare any differently.
[0] https://community.cloudflare.com/t/how-to-protect-cloudflare...
[1] https://support.cloudflare.com/hc/en-us/articles/11500027224...
I'm a bit confused. This looks like a few hundred lines of code to read a value from a hardcoded dictionary. Even as a proof of concept it would be more sensible to just add two numbers or something, at least that gives the impression that you could also make the API do something useful.
What are the runtime limitations that prevent the maxmind database from working inside cloudflare workers?
I was wondering the same.
MaxMind data is accurate to the zip-code level only about 30-60% of the time (as compared with what Google's Geolocation service will provide, which is based on more data points than just the IP address). Only use MaxMind if you're looking for region- or country-level accuracy.
Regarding latency of AWS Lambda:
> on average the response took somewhere between from 200ms to 500ms
I'm getting latency of 66ms to 126ms with some simple Java code running on AWS Lambda using provisioned concurrency. I find the latency is just fine for most use cases.
"using provisioned concurrency"
What's the point of using Lambda if you have to provision capacity for it?
You pay a premium on the compute because it's able to scale down to zero (so in theory, it works out cheaper anyway), but in reality it only works well when you do everything in your power to have it not scale down to zero.
For my use case, it is still much cheaper than having a dedicated VM.
Interesting, so basically you are making Cloudflare's geoip service public for free.
You can do this on appengine too, it provides the client location in a header.
How does MaxMind prevent someone from releasing an open-source version of their database? If you are about to answer "copyright", remember: you can't copyright facts. This has been upheld in the court system many times.
Facts cannot be copyrighted, but a unique compilation of facts can be. This became settled law many decades ago with sports almanacs.
Furthermore, Maxmind adds non-factual entries to the database so they can identify it uniquely as their work. Map makers do something similar: https://en.wikipedia.org/wiki/Trap_street
MaxMind has two versions of their database: the free-to-use version called GeoLite and the paid version (whose name I'm not aware of).
I realise this doesn't directly answer your question but I guess one reason might be that they already provide a free version so releasing the paid version would just get you in strife?
Maxmind's database is updated very regularly and includes a lot of things like risk scoring. Those are absolutely protected.
You can just stick this in a license agreement, surely.
I've been trying to build open source MaxMind alternatives for a bit. (Mostly so I could distribute them with my open source projects)
IP to country is fairly easy and I open sourced all the scripts and the database itself [0].
But IP to city is much harder, I'm not actually sure it's viable for anyone to do that without relying on some other 3rd party service.
I'd be very interested to hear if anyone knows how to pull that off in an open sourceable manner.
A big part of why you can't freely distribute Maxmind's Lite databases is them trying to apply CCPA requirements.
That law will affect open source solutions just as much.
IANAL but Maxmind has to comply with CCPA I assume because they are sourcing data from California residents. If data is sourced in an aggregate or anonymous way, it wouldn't be subject to CCPA.
> Aggregate and anonymous data is exempt from the CCPA, unless it is in any way re-identifiable. https://www.cookiebot.com/en/what-is-ccpa/
Why would you need to depend on a third-party service and network being available for such a basic task? MaxMind provides their GeoLite database for free and it's extremely easy to embed it in your app.
Though it's a little off-topic: before the service http://freegeoip.net/json/ ceased, I used it a lot in testing. I built one myself with the simplified source code from https://github.com/voyagin/freegeoip since I wanted to minimize the response time. BTW, the listed repo isn't mine.
I run a hosted version on https://freegeoip.whoisbot.io but it looks like MaxMind changed their database url so it stopped working
Can you make a geolocation service from static files? 2^32 IPv4 addresses with 2 floats per address would take only 34 gigabytes of storage. Put 256 addresses in a given file, and turn the other three octets into folders. E.g. https://example.com/192/168/0.txt would contain the locations for addresses 192.168.0.0-192.168.0.255.
Would this be cheaper than running these services?
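The octets-to-folders mapping described above is trivial to compute — a sketch in Python (the function name is mine, not from any library):

```python
def geo_file_path(ip: str) -> tuple:
    """Map an IPv4 address to the static file holding its /24 block,
    plus the record index (the last octet) within that file."""
    a, b, c, d = (int(part) for part in ip.split("."))
    return (f"{a}/{b}/{c}.txt", d)
```

So `geo_file_path("192.168.0.17")` points at `192/168/0.txt`, record 17 — the client can fetch exactly one small static file per lookup.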
You could also generate all those files, put that into a GCS Bucket and put an API Gateway in front.
Easier would be (and that's just a thought): you could build a Go/C++/Java app which contains that database pre-compressed in some arbitrary better format than just 2 floats, get it down from your original 34 GB, and just keep it in memory.
For example, if you know that 99% of all IPs in a certain range are from the US, then only store the 1% as a list.
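That "default per range plus exceptions" scheme could be sketched like this — all of the data below is made up for illustration:

```python
from typing import Optional

# One default country per /8 block, plus an exception map for the small
# minority of addresses that differ from their block's default.
DEFAULTS = {192: "US"}            # default for the 192.0.0.0/8 block
EXCEPTIONS = {"192.0.2.1": "DE"}  # the ~1% that differ

def lookup(ip: str) -> Optional[str]:
    if ip in EXCEPTIONS:
        return EXCEPTIONS[ip]
    first_octet = int(ip.split(".")[0])
    return DEFAULTS.get(first_octet)
```

A real implementation would key on CIDR prefixes of varying length rather than whole /8 blocks, but the space saving comes from the same idea: only the exceptions are stored individually.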
Sure, but my point was that it can be done with zero compute cost. There's nothing cheaper than serving static files from cloud storage.
That is just not true.
GCS, for example (and it's the same with S3), charges for storage, egress, and the requests themselves:
Data Storage: 50 GB Standard Storage × $0.026 per GB = $1.30
Network: 10 GB egress × $0.12 per GB = $1.20
Operations: 10,000 Class A operations × $0.05 per 10,000 operations = $0.05
Operations: 50,000 Class B operations × $0.004 per 10,000 operations = $0.02
MaxMind sells their databases as files as well as APIs. GeoLite is free to use but not as accurate as their paid products.
This only works if the geocoder is accessed directly from the browser. It will not if you need to geocode IP addresses received in other ways on the server side.
I guess it’s ok for a basic consumer website, although it’s not exactly equivalent to ip geocoder databases/services - they allow passing ip addresses as part of geocoding request.
You can do this for FREE https://freelancer.freelancercv.com/blog/9/find-website-visi...
Nice! Have you tried AWS Lambda@Edge?
And it looks like similar geolocation info ("cloudfront-viewer-country") is available from AWS as well, so a similar trick may be possible with AWS: https://docs.aws.amazon.com/AmazonCloudFront/latest/Develope...
it's right here in the worker example
https://developers.cloudflare.com/workers/templates/pages/co...
Slightly off-topic.
AFAIK, an unresolved problem is a proper geolocation service - at least for city level resolution - for mobile IP addresses. There are some services in this field (digital element), but they are very unreliable.
That country list doesn't have Kosovo (2008) or South Sudan (2011).
A somewhat related problem is that overseas regions (at least for France) are handled as different countries. It makes sense for a CDN, but for most other purposes, when I use the Internet from Réunion Island my country is "FR", not "RE", since RE is not actually a country. It's also quite inconsistent: I've tried several IPs from Réunion, and some of them are FR while the others are RE.
It doesn't seem to have the same problem with Hawaii for some reason, they're all just US from what I can see.
Also, the Republic of North Macedonia is still in there under its old name ;)
Kosovo is not universally recognised though, to be fair. But yes, I'd include it. Even if you are against it in principle, it's de facto a state and treated as such even by many countries not recognising it in name.
But that would mean that your API should return a country specific result to be accurate.
Which is precisely what for example Mapbox does - they allow consumers to specify a worldview, which adjusts borders and other geographic features based on the specified culture: https://docs.mapbox.com/help/glossary/worldview/
They don't have a Serbian one to resolve this issue, or even more relevant ones like Russia's (remember Crimea).
Doing a geo-based redirection was my first Golang project that got deployed to prod.
Nginx + the GeoIP2 module sets an HTTP request header and proxies the request to the Golang app.
The Go app does a lookup from Redis, based on a combination of the country header and some URL params, and responds with a redirect header.
Hosted it on a t2.medium in EC2 and I have seen it easily handle ~1,500 requests/second without any issue.
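The lookup step of that setup can be sketched in a few lines — here a plain dict stands in for Redis, and the key scheme and URLs are invented for illustration:

```python
# nginx's GeoIP2 module sets a country header; the app builds a key from
# that header plus a URL param and picks the redirect target. A dict
# stands in for the Redis lookup described above.
REDIRECTS = {
    "US:promo": "https://us.example.com/promo",
    "DE:promo": "https://de.example.com/promo",
}

def redirect_target(country_header: str, campaign: str,
                    fallback: str = "https://example.com/") -> str:
    return REDIRECTS.get(f"{country_header}:{campaign}", fallback)
```

Since the app itself only does a key-value lookup, it's easy to see how a small instance can sustain ~1,500 requests/second.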
All possible criticisms aside, this is really good and useful. Great job, friend!!! It's a beautiful day for hacking!! :)
I got this:
"Application error An error occurred in the application and your page could not be served. If you are the application owner, check your logs for details. You can do this from the Heroku CLI with the command heroku logs --tail"
This title is so misleading. I got to the end of the article and thought - wait this is it?!
I read the title and got interested; glad I skimmed. This is very misleading. You didn't "make" a service, you used someone else's.
Are you charging for this?
The code is available for free and you can host it on your own Cloudflare account. Cloudflare provides a free plan (up to 100k daily requests across all your Workers) and a paid plan ($0.50 per 1M requests).
That's two per second, if you don't have a calculator handy.
tl;dr: uses Cloudflare's Workers as an API endpoint and returns the country name based on a header in the CF request.