Linux 3.4 kernel released
kernelnewbies.org

What I thought was more interesting (Linus noted "Nothing really exciting happened since -rc7...") was the exchange between Linus and Peter Zijlstra, in particular this Linus eruption:
It's striking that his manner is blunt and over-general... and his technical content is also blunt and over-general. He's known for his pragmatic and effective engineering decisions.

> And I *do* know that the real world simply isn't simple enough that we could ever do a perfect job, so don't even try - instead aim for "understandable, maintainable, and gets the main issues roughly right".

Found this article citing Peter on NUMA scheduling in March:
Toward better NUMA scheduling http://lwn.net/Articles/486858/
Peter's cited email: http://lwn.net/Articles/486850/
Actually, in reviewing the data, it seems this issue has been around for a long time, which could be the root of Linus's irritation in the first place. Perhaps he feels that no headway has been made for so long that even the smallest perceived misstep sets him off.
If nothing else, I would love to actually see some technically detailed papers come out of this that get to the root of the problem, propose solutions to it, and identify which of those solutions are being tested and when they are expected to be implemented. Nothing like a good argument between engineers to get a root cause rooted out and a solution in place.
not everyone agrees with linus. ingo molnar supported peter zijlstra. with facts. something linus always wants but usually is too busy delivering (ehem..).
"the result of these commits is: 24 files changed, 417 insertions(+), 975 deletions(-)
Most of the linecount win is due to the removal of the dysfunctional power scheduling - but even without that commit it's a simplification:
15 files changed, 415 insertions(+), 481 deletions(-)
while it lifts the historic limitations of the sched-domains approach and makes the code a whole lot more logical."
Your link is empty.
Works for me.
The X32 ABI is interesting. Who'd have thought that this many years after 64-bit became mainstream (and decades after it became mainstream on servers), something like this would be needed.
Does anyone have a good argument why NOT to use this for every non-memory-intensive application? I always thought that using 64-bit pointers was a waste of memory when not needed, and this x32 ABI seems fantastic to me since it keeps the several advantages of newer 64-bit CPUs (more registers, for example, as mentioned in the article).
Recently there was a HN thread [1] about the conservative garbage collection used in some 32bit VMs (the complaint started with Go), because the way their GC works is by scanning portions of process memory for 32 bit values that map to valid address space for that process. Those values are treated as pointers even if they're not pointers, so the data they point to is assumed to be used and is not garbage collected.
Architectures/ABIs with 64bit pointers have a much lower probability of arbitrary values mapping to active memory space for the process, so memory bloat caused by the conservative GC strategy is much more limited.
The x32 abi looks like it would suffer from this 32bit conservative garbage collection problem too. Short of changing the VM strategy, using 64 bit pointers is the only way to ensure that such VMs do not accumulate so much unused but not-garbage-collectable memory allocations.
If an application is not memory intensive, why would you care if there is 'waste'? Personally I wish everyone would just move to 64 bit all the time.
An application that uses 64-bit pointers but doesn't need 64 bits of virtual address space is wasting physical memory because its pointers are twice as large. The application wastes physical memory pages and memory bus traffic just to store 64-bit pointers with lots of unused zero bits, like 0x00000000ffffffff.
> If an application is not memory intensive, why would you care if there is 'waste'?
Performance. What other reason do you need? (Using cache more efficiently makes programs run faster).
Because there is a finite amount of cache at each level.
'not memory intensive' is an ambiguous term. he meant 'does not use a lot of memory, but may access what memory it does use quite a bit'. you meant 'does not access memory often, and performance thus does not depend much on memory access latency'.
Ten not memory intensive applications, combined, can easily be memory intensive.
address space randomization
(i can dig out a post of (afair) solar designer that goes into more detail, but i think this should be clear as is)
seems like a great idea. toolchain support may be an issue for a while, especially if you link against libraries with a different ABI. one thing to keep in mind is that the performance gain from e.g. more registers may not be as big as you'd think, since the hardware already plays lots of tricks to get around i386's limitations (e.g. register renaming) and likely extracts a good bit of the available ILP.
Actually, mainline kernel support is one of the last things that's been needed to get this going. gcc 4.7 and the latest glibc and binutils have had support for a little while. Now it'll still be a while before distros start supporting it as a first-class citizen, but the pieces are there.
> Who'd have thought [...]
Well, Donald Knuth for one. Here's a blog entry I wrote about Donald Knuth and how he asked for this a while back. I think it makes a lot of sense and I'm really glad to see this maturing!
http://blog.reverberate.org/2011/09/making-knuth-wish-come-t...
Yeah, I remember this one:
http://www-cs-faculty.stanford.edu/~uno/news08.html
Saving 32bits means a lot to him.
SGI's IRIX has had the ability to do this since decades ago. Of course, like others have stated, the main reason is to conserve both memory (for pointers) and, more importantly, bus bandwidth (which is a major problem for memory in MPI/SMP systems, especially in the machines IRIX supported with 128+ CPUs in a NUMA configuration).
There are still many features where linux is just playing catch-up that commercial unix kernels had decades ago.
In fairness, x86-64 Linux has supported running 32-bit x86 processes from the start (likewise for PPC and PPC64). It just so happens that the difference between 32-bit and 64-bit CPU modes is quite significant on x86, so this new ABI for using 32-bit pointers while the CPU is in 64-bit mode is really Intel/AMD's "fault" more than Linux's - the 32-bit mode's register starvation is particularly crippling. (Actually, if anyone is to blame, it's Microsoft's "fault" for tying consumer computing down to x86.)
IRIX may have features that Linux lacks but this is not an example of one. Linux supports 32-bit binaries with 32-bit pointers that run on 64-bit kernels on the other 64-bit archs, like powerpc or sparc64.
This problem is really a quirk that results from the fact that AMD made significant changes to the arch (doubling the general-purpose registers from 8 to 16) when they invented amd64.
Yes, Linux is the new old: IRIX had transparent hugepage support in 1993; Linux implemented it this year.
Is the X32 ABI meant so that one can run X32 programs on an x64 kernel as well, or does one have to decide to use X32 once and for all and then compile everything with this ABI?
My understanding is that the X32 ABI exists only in x86-64 kernels, and requires relevant X32 userspace libraries.
Basically you can choose whether you want your program to have only a 4 GB address space and thus consume only 4 bytes of memory per pointer. This is a per-process choice (but obviously requires specially built binaries).
"...making slow start suboptimal" caught my eye, but I didn't understand the rest. Is this for a specific circumstance or general? Did they find something better than slow start or did they break it, causing it to be suboptimal?
Is default support for dynamic graphics switching (~bumblebee) meant to be implemented as part of the kernel or the graphics driver?
Has the slow USB copy bug been fixed yet?
there is progress on this. the bulk of these bugs should have been fixed a release or three ago. there was an LWN article about it.
Wasn't this one supposed to have some Android kernel integration, too? Or maybe the next one.
Btrfs is getting the love.
Is it me, or are Linux kernel releases becoming more frequent?
It is you. Since 2005 (the early days of 2.6), there have been four to five major releases per year. 3.4 is the second release this year, so the trend continues almost perfectly.
They've been at a pretty steady pace since they decided to go all in on the "release early, release often" thing with 2.6. 3.0 is 2.6.40, after all.
The second number is changing more frequently now, instead of the third.
I'm confused. Is 3.x meant for a different class of system, or are distros like Ubuntu intentionally lagging behind in 2.6.x land?
The jump to 3.x was a more or less arbitrary decision made by Linus because the 2.6 version numbers were getting very large. Switching to 3.x should basically be the same as any other kernel upgrade, possibly with some additional issues with software that assumes things about the form of the kernel version number. There shouldn't be any distros holding back from upgrading; in fact Ubuntu 12.04 uses a 3.2 kernel.
Ubuntu 12.04 runs kernel 3.2.
Ugh, thanks. I forgot I'm still running 11.10. I should've checked around first.
Latest Ubuntu has kernel version 3.2. What distros are still on 2.6?
Debian Stable of course!
Even Wheezy probably isn't going to get a 3.X series kernel :(
Wheezy already has 3.2
Why so? I believe wheezy currently runs Linux 3.2.
My guess is that Wheezy will run either 3.2 or 3.3.
There is no 'major version jump' difference between the two. The reasoning was simply "2.6.40 is getting unwieldy as a number, so I'm going to start 3.0 here".