AMS-IX Breaks 4 Terabits per Second Barrier
ams-ix.net: "... underlines AMS-IX's leading position within the global Internet Exchange Point market."
Yeah; DE-CIX, for instance, has been hitting the 4 Tbps mark for quite some time now and is trending towards 5 Tbps.
And the gap will only increase if the proposed intelligence services act (Wiv20xx) is passed by parliament. The act allows the intelligence services to conduct mass surveillance of all electronic communication and forces all service providers (not just telecom providers) to pay for surveillance equipment.
Besides it being morally wrong to mass-surveil everyone when the current act already allows the intelligence services to monitor the few thousand potential terrorists and spies, it would also hurt the Dutch economy. International companies would move their European cloud infrastructure to e.g. Germany, and Dutch startups providing a communication service (i.e. almost any startup) would be less trusted by their users and would run the risk of paying for expensive surveillance equipment.
If you are Dutch, I recommend reading the reaction of Nederland ICT [1] to the proposed act.
[1] http://www.internetconsultatie.nl/wiv/reactie/828d2159-cf3c-...
The MP who proposed the law recently lost the support of his party (the Labour Party) for pushing the law through in its current form.
Also, the CTIVD, the organization that supervises the AIVD (the Dutch NSA), has said the law cannot be implemented in its current form.
So the chance that it will pass is pretty small. They'll probably juggle some words around and try again, though, so we should stay alert. Luckily it has gotten quite some media attention, and people seem to be aware that the law is a bad idea.
They measure differently: DE-CIX reports peaks from a 15-second measurement window over sFlow data, while AMS-IX uses the traditional 5-minute window.
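To see why the window size matters: a minimal sketch, with entirely synthetic numbers (not real AMS-IX/DE-CIX data), showing that the same bursty traffic reports a higher "peak" when averaged over 15 s than over 5 min.

```python
# Illustrative only: one hour of per-second traffic samples (bits/s),
# a steady 3 Tbps with a single 60-second burst to 5 Tbps.

def peak_with_window(samples_bps, window):
    """Max of non-overlapping window averages over per-second samples."""
    averages = [
        sum(samples_bps[i:i + window]) / window
        for i in range(0, len(samples_bps) - window + 1, window)
    ]
    return max(averages)

traffic = [3e12] * 3600
for i in range(1800, 1860):
    traffic[i] = 5e12

print(peak_with_window(traffic, 15))   # 15 s window catches the burst: 5 Tbps
print(peak_with_window(traffic, 300))  # 5 min window dilutes it: 3.4 Tbps
```

So two exchanges carrying identical traffic would still publish different peak figures.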
Full updated list is here: https://en.wikipedia.org/wiki/List_of_Internet_exchange_poin...
Well, to be fair, AMS-IX is located around the Amsterdam area (a circle of around 30 km or so), while DE-CIX has locations in the US as well. I'm not familiar with the specifics, but based on these locations, I'd say AMS-IX is the biggest internet exchange serving Europe.
https://www.de-cix.net/about/statistics/ shows statistics for Frankfurt of > 4GBit/s
> 4GBit/s is technically correct; however, I think you meant > 4TBit/s.
Both DE-CIX and AMS-IX have exchanges in NYC now. Both have actually started branching out and building exchanges in many new cities. The traffic levels they're talking about are specific to both of their home markets (Amsterdam/Frankfurt).
DE-CIX has the same "metro" setup as AMS-IX; they're in a large number of locations in the Frankfurt metro area: https://www.de-cix.net/products-services/de-cix-frankfurt/
The DE-CIX graph seems to be about the Frankfurt exchange. AMS-IX doesn't just serve the Amsterdam area, it's where many undersea cables from North America and the UK enter the European mainland.
I'm more surprised that 'List of Internet exchange points' isn't dominated by North American and Asian exchanges. Do they have a larger number of smaller ones?
As far as I can tell, all of Asia hates each other.
My traffic in the Philippines would actually go out of the country and back in, occasionally via Los Angeles (500ms+), because the incumbent monopoly telco refuses to peer with any other ISP, so if you don't use them your traffic is intentionally screwed. [There is an IX there for small ISPs; the one with 99% market share just doesn't peer there.]
Aside from Singapore, every other country has something approaching this level of fucked-ness: HK-to-CN traffic often goes via LA/Seattle, TW-to-CN traffic often goes via LA/Seattle, all of China Telecom's peering links are oversubscribed to death anyway and fall over during peak hours, and a lot of the SEA traffic I've seen traverses Singapore or worse, even if it's entirely domestic-bound.
No, Amsterdam does not have the amount of peering it does because of submarine cable landings, and DE-CIX is the largest by bits exchanged in a single metro area (Frankfurt).
DE-CIX doesn't combine their stats from all exchanges, as far as I know. AMS-IX also serves NYC, the Bay Area, Chicago, Hong Kong, and the Caribbean.
The Dutch are better at marketing, it seems.
They never said they were the first to break this barrier (but I had to go back and check).
By way of historical comparison, consider that in 1992, the ULCC's transatlantic "fat pipe" was a 1.5Mbps circuit:
http://jam.ja.net/marketing/janet30years/images/gallery/grap...
Or that the total traffic served by the University of Bath's website across all of 1997 was 63MB:
https://wiki.bath.ac.uk/display/bucsha/Computing+Service+His...
Is... is that figure correct!? Conservatively, taking the number of requests at the start of the year, we get 6 million req/year.
63*1024*1024 == 66060288 bytes. 66060288/6000000 == ~11 bytes/request. That seems too small. The overhead of the HTTP request alone (without content) would be greater than that!
Something does seem funky. The website looked like this in 1997: http://web.archive.org/web/19970418234503id_/http://www.bath... [This page is 3,780 bytes.]
In fact, you can find the server stats from back then: http://web.archive.org/web/19970822145424/http://www.bath.ac...
This says that it transferred "3 599 Mbytes" and there were "728 506" requests. Interpreting "3 599" as 3.599 gives 4.94 bytes per request, which is absurd. It must be 3.6 GB, making each response just under 5 kB. This seems much more reasonable.
So the number on that page should probably be interpreted as 63 GB, which is reasonable if we assume the site became more popular later in the year, as the original source suggests (3.6 GB*12 = 43.2 GB, and the stats are from May).
Also notice the following year (1998) says 126 MBytes and in 1999, 197 GB. That's an order of magnitude jump!
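A quick sanity check of the arithmetic above (figures taken from the May 1997 stats page linked earlier):

```python
# "3 599 Mbytes" read as 3599 MB, over "728 506" requests:
transferred_bytes = 3599 * 10**6
requests = 728_506
print(round(transferred_bytes / requests))     # ~4940 bytes, just under 5 kB/response

# Reading "3 599" as 3.599 MB instead gives an absurd per-request size:
print(round(3.599e6 / requests, 2))            # ~4.94 bytes/request

# Extrapolating ~3.6 GB/month across a year:
print(round(3.6 * 12, 1))                      # 43.2 GB, consistent with "63 GB"
                                               # if traffic grew later in the year
```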
Why would you interpret "3 599 Mbytes" as 3.599 MB, and not as 3599 MB? 3599 MB / 728506 ≈ 4.94 kB per request (using decimal prefixes).
Because that seems to be what the person who came up with the "63 MB" figure has done.
It reads as if by "order of magnitude jump" you mean "megabyte to gigabyte", but (obviously, now that I point it out) that is a 3-orders-of-magnitude jump!
I think he is correct. An order of magnitude change for (most) SI units is a change by a factor of 10, while it is a change by a factor of 1024 for bytes.
Yay. More peering at Internet Exchanges makes internet faster and cheaper. Europe has the lowest effective cost of bandwidth:
https://blog.cloudflare.com/the-relative-cost-of-bandwidth-a...
It's really interesting to see the 19h → 24h increase in traffic. Most likely due to online streaming, this is a predictably sharp increase on the Hamburg and Munich POPs[0] for DE-CIX.
[0] https://www.de-cix.net/about/statistics/ (scroll a bit down)
More likely due to the release of iOS9. A couple of other European Internet exchanges also peaked last night [0].
In the past few years Apple have been embracing public peering much more - according to PeeringDB they are at 37 locations with many having multiple 100G connections.
If you look at the weekly statistics, you see the bump every day. If iOS is putting a concentrated load on the network from 20h to 24h, they could redesign their update system a bit to spread the load more across the day...
It's not that easy.
The peak at that European time is due to people getting home to unrestricted Internet access - which means their device can phone home and update.
I don't believe in coincidences ;)
But Apple uses Akamai as their CDN.
I always assumed given its age and role that it would have the best peering bar none.
They have built their own CDN - http://blog.streamingmedia.com/2014/02/apple-building-cdn-so...
Those numbers seem very low to me; I feel like something doesn't add up. 1 Gbit/s FTTx offers from ISPs are getting more and more widespread, so that would mean the peak traffic is equivalent to only around 4000 simultaneous users?
Usage vs capacity. I have Gbit service at home, but I'm not saturating the link 24/7.
Genuine question! Have you ever been able to saturate your connection with a single TCP stream or did it require multiple streams (possibly from multiple devices)?
How come we're still using bits-per-second and not bytes-per-second? Any reason other than being historical at this point?
Because the size of a byte isn't fixed. It is hardware dependent. There is no definitive standard defining what the size of a byte is.
I would be surprised if, practically speaking, 8 bits ≠ 1 byte for 99.99% of all general applications. My feeling is that the .01% can do the math so the other 99.99% don't have to.
Yeah, but specifically for networking, error-detecting and error-correcting codes can make a byte at the app level > 8 bits on the wire, transparently. The capacity of the hardware is independent of that, so they talk in baud.
Data is sent serially over a fiber or copper line, bit by bit, not byte by byte. Also, because bytes can be compressed and bits cannot, giving the throughput in bits per second is an absolute value.
Bytes can be compressed in sequence, not individually (just like bits). I mean, why don't we measure storage in bits instead of bytes? It just seems unnecessarily confusing.
Actually, memory chips (NAND flash, DRAM) are usually specified in terms of bits, usually using a lowercase 'b' to denote 'bit'.
Online sellers have used this to their advantage, selling e.g. "1Gb" flash drives, with real 1 gigabit flash chips, which turn out to be ~128 megabytes.
Or 120 megabytes.
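Both figures fall out of the decimal-vs-binary prefix ambiguity; a quick illustration:

```python
# A "1 Gb" chip, interpreted two ways:
bits_binary = 2**30                    # 1 Gib (binary gigabit)
print(bits_binary // 8 // 2**20)       # 128 MiB -- the "~128 megabytes" figure

bits_decimal = 10**9                   # 1 Gb (decimal gigabit)
print(bits_decimal / 8 / 10**6)        # 125 MB (decimal)
print(bits_decimal / 8 / 2**20)        # ~119.2 MiB -- roughly the "120 megabytes" figure
```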
Even if the only reason is historical: why not, what advantage would bytes per second have?
How long does it take to transfer a gigabyte file? Sure we can do the math but why (considering the average person)? Maybe there should be a common core revolution in engineering :-)
I can't do the exact math for that without looking up a bunch of stuff.
I don't know how much I need to subtract for the multiple headers and other overhead, how fast the transfer rate gets to the maximum, how many packets get lost, what other factors of the congestion control algorithms might impact my transfer, etc.
I just divide by 10. How long will it take to download a 100MB file with a 100Mbps connection? Approximately 10 seconds. I know that the theoretical maximum is 8 seconds, but practical factors like flow control and all that more or less cancel out the 20% error.
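A minimal sketch of that rule of thumb next to the exact bits-to-bytes conversion:

```python
def transfer_time_exact(size_mb, link_mbps):
    """Theoretical minimum: 8 bits per byte, no protocol overhead."""
    return size_mb * 8 / link_mbps

def transfer_time_rule_of_thumb(size_mb, link_mbps):
    """Divide link speed by 10; the extra ~20% absorbs real-world overhead."""
    return size_mb / (link_mbps / 10)

print(transfer_time_exact(100, 100))          # 8.0 s (theoretical)
print(transfer_time_rule_of_thumb(100, 100))  # 10.0 s (practical estimate)
```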
Try this bandwidth calculator (it's in German, but I think it's not hard to understand)
You are missing my point: the raw bit rate of the network is not the rate of file transfers. There is a bunch of logic around the raw bits on the wire that makes it much more complicated than dividing by 8.
Actually, the real point is that the average person doesn't need engineering precision, which was my original point. Whether it takes 60 s or 65 s to transfer a file doesn't matter to them. Engineers always like to argue precision to justify complexity and lose sight of the larger picture (e.g. consumers/the majority matter).
By the way, how does expressing it in bits vs. bytes help you with your desired calculations?
You brought up math. I never disputed that 100MB / 12.5MByte/s is slightly easier than 100MB / 100Mbit/s. I'm just saying that calculation, in both forms, gives a very inaccurate answer to how long a file transfer will take, which is what you brought up as the big reason.
An easier and better answer is what jdiez17 described: just divide by 10. I.e. a 100 Mbit/s link speed is approximately 10 MB/s of data transfer.
All of that has nothing to do with bits vs bytes.
For example, consider the TCP throughput equation, which can be expressed as follows: http://www.mathcs.emory.edu/~cheung/Courses/558/Syllabus/07-... (depending on your degree of accuracy).
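The simple form of that equation is commonly known as the Mathis model; a minimal sketch assuming that form, with purely illustrative parameter values:

```python
import math

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate, c=1.22):
    """Mathis model: throughput ~= (MSS / RTT) * (C / sqrt(p)), C ~ 1.22."""
    return (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss_rate))

# 1460-byte MSS, 100 ms RTT, 0.01% loss:
print(tcp_throughput_bps(1460, 0.1, 1e-4) / 1e6)  # ~14.2 Mbit/s
```

Note that loss rate and RTT, not the raw link speed, dominate the result, which is the point being made above.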
Ask Google:
100 Mbps in MBps
250 GB / 100 Mbps
100 Mbps * 0.8 in MBps
Thanks to the launch of iOS 9 :) Funny to see the big bump after 19:00
How is this a "barrier"?