The Pixelbook is being used to test Google's Fuchsia OS
androidpolice.com
Aside from control, what does this provide over Linux?
IMO, the Linux desktop problem is due partially to relying on the antiquated X Window System, but mostly to a lack of funding for a good vision, e.g. Unity.
After growing up with OpenLook, then Motif, the plethora of Linux window managers, and unhappy years with Windows, I noticed most of the elegant apps were being written for OSX. I’m not sure how much of this is due to the devs or the OS/libraries, but probably both.
The solution I want is a Linux OS, a WM with a good, cohesive, long term vision, and an easy way to build apps within that vision — something like a native Electron minus the memory and CPU overhead. I believe Google could do this.
Control is all of it.
Security on Android is a joke for 80%+ of users. Their devices can't run the latest version of Android, because various vendors' drivers are out-of-tree kernel patches that are un-upstreamable for non-technical reasons.
(By comparison, Chrome OS is also Linux-based, but, IIRC, it requires all shipping devices to have drivers upstreamed.)
Owning an OS with a stable device driver ABI would allow Google to fix the Android fragmentation problem and make sure all devices stay up to date, à la Chrome OS.
But it's not just control, is it? Reading this sub-thread I'm not convinced some commenters have actually taken much of a technical look at Fuchsia. I'm not associated with the project but from what I understand there are plenty of technical merits:
1. Zircon, the kernel, is entirely capability-based. No users, no groups. Those are higher abstractions built on the capability system. This is all around a win for security, from executing 3rd-party drivers to building a sandboxed multi-user system, not to mention it is just generally more flexible and less error-prone at the kernel boundaries and internally.
2. The WM is scene based. Applications draw to a scene which the WM composites and renders. This means a single draw list is sent to the GPU for each pass instead of many apps spamming their own updates. Also you can do cool things like have objects from one app cast shadows on another app.
3. The kernel API is not a bloated mess and will not become one. Keeping the number of syscalls on the order of 10s not 100s is a design goal. You can see the commitment to this in the way the project is structured. Which brings up:
4. Testability. The project was designed with testability in mind. Not Enterprise Java mock and verify the world shit. Just good practice separation of responsibility between the different layers and enforcement of a sane unidirectional dependency relationship from lower to higher layers. Good luck testing the Linux kernel.
5. The system will update from a single image rather than relying on N parties to pull new patches into their distro and update and test everything N times before releasing to people. Again layers.
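Point 2's single-draw-list idea can be sketched with a toy model. Everything below is hypothetical and does not resemble Fuchsia's actual Scenic API; it just shows how a scene-based WM flattens all apps' content into one globally sorted submission per frame:

```python
# Toy scene-based compositor (hypothetical; not the real Scenic API).
# Each app owns a scene; the WM walks every scene once per frame and
# emits a single, globally z-sorted draw list for one GPU submission.

class Scene:
    def __init__(self, app_name):
        self.app_name = app_name
        self.nodes = []  # list of (z_order, primitive) pairs

    def draw(self, z, primitive):
        self.nodes.append((z, primitive))

class Compositor:
    def __init__(self):
        self.scenes = []

    def attach(self, scene):
        self.scenes.append(scene)

    def composite(self):
        """Flatten all apps' scenes into one draw list for a single pass."""
        draw_list = [(z, s.app_name, prim)
                     for s in self.scenes for z, prim in s.nodes]
        # One global z-sort is what lets effects span apps, e.g. an object
        # in one app casting a shadow onto another app's window.
        return sorted(draw_list)

browser = Scene("browser")
browser.draw(0, "page")
browser.draw(2, "tooltip")

terminal = Scene("terminal")
terminal.draw(1, "window")

wm = Compositor()
wm.attach(browser)
wm.attach(terminal)
frame = wm.composite()
# frame interleaves both apps' primitives by z-order in one submission:
# [(0, 'browser', 'page'), (1, 'terminal', 'window'), (2, 'browser', 'tooltip')]
```

Compare this with each app spamming its own updates to the GPU: here the compositor is the only component that ever submits, so global effects and frame pacing live in one place.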
This isn't all new by any means, but it's a pretty practical approach to an OS in 2017. This isn't an "academic microkernel", and also decidedly not *nix.
Enough evangelism. Point is, the project is not solely designed to give the big G more control over a platform. Not saying that is not one goal, but technically this thing checks out.
> Good luck testing the Linux kernel.
This particular line struck me more than the rest of your (very good) post. For those of us unfamiliar, can you recommend/reference critical resources on the testability of the Linux kernel?
AFAIK fuzzing is the state of the art when it comes to testing Linux: https://lwn.net/Articles/677764/
> un-upstreamable for non-technical reasons
It was my understanding that it was almost completely for technical reasons, i.e., vendors write drivers that mostly work but are completely terrible from a quality perspective.
Quality problems with code at this magnitude aren't really a technical problem (i.e. not something you can fix by asking the same programmers who created the problems to fix them.) It's rare that good programmers could write code as badly as these patches demonstrate. Even actively-harmful coding standards directed from the top down wouldn't cause this kind of code.
Instead, the kind of code that ends up in these driver blobs is caused by, in essence, a political problem: they simply hired bad programmers. The only way to fix that, is to demand that the driver vendors' management teams adopt higher standards for their software hires: that they fire many of the programmers they have, and hire new ones in a much more stringent process. And probably also pay them more, because that stringent process will likely choke their existing funnel out of existence.
It's much the same as, say, finding that a company is using a low-quality outsourcing firm. Would you say that there's a technical problem inside the outsourcing firm? No, you'd say that there's a management problem in the choice of outsourcing firm.
I don't know if this applies so much at the phone / laptop level, but for embedded devices, the example code / reference drivers are often terrible, but it's not entirely fair to blame the devs. This issue is usually not so much that they're bad programmers and more that they're good electronics engineers who, once the hardware is done, are the only people with the knowledge required to implement the drivers.
And the same vendors will suddenly write better drivers with a new unfamiliar OS?
No, but the drivers and kernel will be updatable independently, and the drivers will be sandboxed to their own process with limited permissions.
The quality of the user experience will be better as a driver crash won't bring down the device.
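A toy model of why that improves resilience (illustrative only; this is plain OS-process isolation, not Fuchsia's actual driver framework): the driver runs in its own process, so a crash becomes an exit code the supervisor can observe and recover from rather than a kernel panic.

```python
# Toy model of user-space drivers (illustrative; not Fuchsia's actual DDK).
# The "driver" runs in its own OS process; when it dies, the supervisor
# sees a nonzero exit code and restarts it, and the system never goes down.
import subprocess
import sys

DRIVER_CODE = """
import sys
# A buggy driver: crash if asked to, otherwise handle one request and exit.
if "--crash" in sys.argv:
    raise RuntimeError("driver bug")  # would be an oops/panic if in-kernel
"""

def run_driver(crash):
    args = [sys.executable, "-c", DRIVER_CODE]
    if crash:
        args.append("--crash")
    return subprocess.run(args, capture_output=True).returncode

first = run_driver(crash=True)    # the driver dies...
second = run_driver(crash=False)  # ...so the supervisor starts a fresh one
print(first != 0, second == 0)    # prints: True True
```

In a monolithic kernel the first case would have taken the whole machine down with it.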
>Owning an OS with a stable device driver ABI
Are they actually doing that though? I haven't read anything along those lines, not that I have looked at Fuchsia in depth.
I haven't read this yet, but they have docs outlining driver development. Note that it is called a "driver development kit" which may imply it's more stable.
https://fuchsia.googlesource.com/zircon/+/HEAD/docs/ddk/over...
Wouldn't Treble fix the Android fragmentation problem? Why is this needed/better?
It doesn't fix the underlying problem: kernel and drivers are still not updatable and will keep having security issues.
The two aren't mutually exclusive.
Treble attempts to fix fragmentation by re-standardizing and more loosely coupling the driver layer and system APIs. We won't know if this is a solution until later, and this XKCD[0] kind of explains why.
This is a real time OS from the ground up. Some have SPECULATED it might be run in a VM/container rather than directly on hardware, effectively further standardizing in a similar way to Treble but via a virtual machine's interface instead of a series of system APIs.
That would allow Google to ship Fuchsia VM/container images that could run in every single handset no matter the hardware, and therefore every Android device could be updated directly by Google at the same time.
Meaning Fuchsia may not replace Linux, it might run above Linux and therefore above Treble. That's why the two aren't mutually exclusive.
PS - This is how Microsoft's XBox One platform is engineered currently.
A bit pedantic but that XKCD doesn't apply if your standard is an overlay to another standard, for the most part.
USB2 didn't fragment the "USB standards" space, because USB1 devices work on it.
Treble doesn't overlay an existing standard in all cases, it completely replaces them in some areas.
I disagree. I run the Linux desktop on all my computers.
I think you don't understand how much Google works to have security by design. Windows, Linux, and OS X can all be fine "traditional" desktop systems (though the lack of a unified vision on Linux hurts it incredibly).
I see Fuchsia as a desktop system that (a) has a native, fundamental concept of a graphical desktop and (b) has deep sandboxing on a level similar to Chrome, but for the entire OS, meaning it could become an all-purpose platform for thick-client applications that isn't reliant on the web and still can't easily be exploited.
I suspect they are not satisfied with any of their desktop options. I doubt this has anything to do with Android or Chrome OS within the next 10 years. They just want a desktop that sucks less and they can guarantee follows their own security practices (not FIPS, but process isolation and capability injection).
I completely agree with this. Google wants a desktop-class OS that they have complete control over and the ability to implement whatever features they want without any friction or compromises. I also think they want an OS that has great interoperability with Android so that they can do a lot of the things Apple is doing with iOS and macOS. The RTOS capability is another benefit, as it'll allow Fuchsia to be used in areas where Android may not be well suited, such as the OS for self-driving cars.
Elementary is the OS you're looking for: https://elementary.io
I wish it had more adoption. Right now there are not a lot of apps written using its UI guidelines/framework, as most projects are worried about portability and/or have moved to Electron. It is slowly getting better though, and seems to be developing an ecosystem geared towards quality, as OSX used to be: https://medium.com/elementaryos/appcenter-spotlight-2017-wra...
I’ve been using Elementary for a year straight[1] and like it a lot, but I see it as Ubuntu with the “right” UX.
(I also don’t see Linux developers flocking to it in droves, which is sad because it is a lot cleaner and easier to use than Gnome and KDE, IMHO, but then again Linux has always been more about diverging choices than unity—and I don’t mean the desktop environment here).
The only thing Elementary needs to do is allow for disabling all animations, to get rid of the perceived latency issues some folk complain about...
But I digress. It isn’t an OS, and you get all the Linux legacy underneath, so it’s understandable that Fuchsia is happening. Google seems to be attempting to lay a new foundation here, and I think it’s actually a good idea to do so, although the number of third-party packages and run times they’re bringing in makes it hard to peg it as “legacy-free”.
Time will tell, I guess.
Nothing is legacy-free, at least not yet; we haven't figured that out. Everything you touch is future legacy. It is hubris to think one has solved this, or made something so good it will last for so long. The only legacy-free thing is a void, and even it will be filled by an inferior solution in the future.
I am so interested in this! Are you willing to answer some questions?
1. What hardware are you running? 2. Do you pay for apps? 3. Are you able to use this as your primary machine? 4. What kind of work do you do?
1. A C720 Chromebook and an i7 desktop that is also a KVM host. 2. No. There isn’t anything I’d consider worth paying for in the store right now. 3 & 4. I work on Azure solutions, so these are my Linux machines. I do a lot of work on them, but need Office and Windows for the corporate bits, so my main desktop machine is... a Mac, and I carry a Surface to customers.
I wish more distros would start supplying packages for Pantheon. I'm not interested in a distro that forks Ubuntu LTS + specially patched packages (because Gnome3 refuses to merge changes that fix problems, but only really benefit non-Gnome3 desktops (completely against the spirit of open source and GNU + FDO)), but I would love it to be based on a distro that is kept up to date (such as Debian or Arch).
I actually like that it’s LTS underneath, because I know I can run the machine for a couple of years with stable packages and then upgrade to the next version.
I understand the appeal of more up-to-date stuff, but with Docker I can have my stable cake and swap toppings at will :)
Because operating systems and application cliches should be separate! Layers that slip smoothly past each other precisely via well defined protocols of interaction. Now() is an outdated concept if we are to build systems that can age with grace.
> because Gnome3 refuses to merge changes that fix problems, but only really benefit non-Gnome3 desktops (completely against the spirit of open source and GNU + FDO)
This is actually what I most like about Google's OS initiative. The fact we'll have ONE and ONLY ONE desktop environment and all the political infighting of the last 30 years won't matter any more.
That's all great and all, but why pick Gnome3 to rally behind? I literally know no one that can stand that desktop environment, and it has become part of this weird systemd/pulseaudio cult that is slowly destroying any chance of moving forward and producing a modern desktop environment for Linux that average people will accept, due to a constant (and ignorant) political war against pretty much anyone else that isn't part of that particular groupthink.
The irony here, I think, is Pantheon actually completed this goal with a much smaller team, with no political bullshit, and is a clearly superior product.
By virtue of this fact, and by your reasoning, Gnome3 has no reason to continue to exist at all and should end development if, truly, Linux is meant to have a One True Desktop Environment (tm).
Yeah I pretty much disregard any distro that's based on 16.04, too many ancient packages
I really wanted to love Elementary. But the UI lag and general clunkiness compared to OSX is just too much for me. I think the world can only benefit from new operating systems completely free of Linux influence.
Try installing Elementary Tweaks and turning off most animations (some, sadly, cannot be turned off yet, but I run it on an Acer C720 with 2GB of RAM, and it works beautifully).
I don't get the hype for Elementary. It looks like just another GTK-themed OS. I swear by Manjaro KDE now; it looks gorgeous out of the box.
Capability based security and microkernel, both pretty substantial differences.
Also it seems like Magenta doesn't have users and groups as first-class constructs. It looks like there are primitives to create them in user space, but the kernel has only one construct (Jobs) by which security is administered, rather than the many different security paradigms of a *nix system (i.e. a process can have access based on what user it's running as, its uid/gid, seccomp controls, etc.). This makes security much easier to manage at the kernel level, as all security granted to processes is explicitly granted rather than inferred, as is often the case in *nix.
Honestly, I think the multi-user assumptions that *nix started with are largely irrelevant now. Most people don't have multi-user needs on their devices (you can't even do it on iOS), and even servers are moving towards single-user constructs with containers, etc. I think an operating system built around a multi-user model will be viewed as the edge case in the coming decades rather than the norm.
Aside from the microkernel, doesn't Android already do capability based security?
No. It has a user-visible idea of "capabilities" in the sense that apps get a checklist of things they can and cannot access, but that's not "capability based security," just another access control list.
Capability based security like Fuchsia has means that there is no ambient authority, or in other words no singleton resources. No fopen(), no connect(), etc. Instead, processes access everything through file-handle-like objects that are given to them by their creator, which can thus be sandboxed/mocked/revoked/etc without anything extra like containers or jails or VMs.
I think this is a much lower level thing.
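That contrast between ambient authority and capability passing can be sketched in a few lines. This is an illustrative model only; Zircon's real handle API looks nothing like this:

```python
# Toy contrast between ambient authority and capability passing
# (illustrative only; Zircon's real handle API looks nothing like this).

FILESYSTEM = {"/etc/secret": "s3cret"}  # a global, nameable resource

def ambient_open(path):
    # fopen()-style ambient authority: any code can name any resource.
    return FILESYSTEM[path]

class Handle:
    """A capability: an unforgeable reference to one specific resource."""
    def __init__(self, path):
        self._path = path

    def read(self):
        return FILESYSTEM[self._path]

def sandboxed_app(handles):
    # The app can use only the handles its creator granted it; there is
    # no global namespace to reach into, so nothing else is accessible.
    return [h.read() for h in handles]

# Ambient authority: the app simply helps itself to the secret.
assert ambient_open("/etc/secret") == "s3cret"

# Capability style: launched with an empty handle table, it gets nothing,
assert sandboxed_app([]) == []
# and access exists only where it was explicitly granted at creation time.
assert sandboxed_app([Handle("/etc/secret")]) == ["s3cret"]
```

Because access flows only through handles, sandboxing, mocking, and revocation fall out for free: swap or withhold the handle, no containers or jails needed.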
One feature that differentiates Linux from other systems is the sheer amount of choice that you get.
For example, with Linux I can choose GNOME, KDE, a selection of other desktop environments, or I can choose a simple window manager like twm and sort of roll my own environment. Or maybe I could set my system up so that it runs without a window manager and just gives me a full-screen emacs environment.
With Mac OS, for example, someone else made all of the choices and I have to live with them.
What Google is doing will probably end up getting pretty close to your vision, but I suspect that the end result will be very similar to Android - a system that's technically Linux at its core, but where all of the choices have been made for you.
This wide choice is actually putting Linux at a disadvantage: instead of consolidating efforts to create one decent desktop environment, we have half a dozen half-assed ones.
God, I've just recently had to downgrade to Gnome 3.24, because the latest 3.26 version kept crashing with a segfault a few times a day.
It extends beyond desktop environments, that's only the most visible piece.
Closely related to desktop environments are the two major UI toolkits. Why does Kate have a different file picker than Gimp? Why does LibreOffice give me yet another file picker? On MacOS and Windows, the file picker is a solved problem.
Init systems, until fairly recently when most distributions consolidated around systemd, were in a similar state.
The current Wayland vs X situation is another example - some distributions prefer Wayland and its quirks, whereas others use X and its quirks. In this case, it seems pretty clear that Wayland is the winner and we're just waiting for everything else to catch up.
> Why does Kate have a different file picker than Gimp? Why does LibreOffice give me yet another file picker? On MacOS and Windows, the file picker is a solved problem.
Because these file pickers are implemented by two different GUI toolkits that have a different idea of how file picking should be done. This isn't anything new: Xaw programs used their own (handmade) file dialogs, Motif had its own, Java/AWT had its own, Java/Swing had its own, etc. Unless you want to force everyone to use a single toolkit (which is unrealistic for several reasons), you cannot get the same behavior everywhere.
Note that this isn't specific to Linux, in Windows and probably macOS you get the same unless you stick to applications using only the native APIs and avoid any cross platform application that use Gtk, Qt, Swing or any other toolkit that cannot tie itself to a single native widget library (and TBH even with some applications that do tie themselves to Win32, they sometimes end up implementing their own file pickers anyway).
And of course file pickers tend to be the most shared of GUI elements; when it comes to the actual UIs themselves, even in Windows you get a ton of different toolkits, styles, behaviors, etc. that totally ignore the native look and feel (assuming there is one, since Win32, WinForms, WPF and UWP all behave anywhere from slightly to totally differently from each other, depending on which ones you compare).
> In this case, it seems pretty clear that Wayland is the winner and we're just waiting for everything else to catch up.
Wayland is very restrictive for many uses and actually lacks several useful features compared to X11 - some it pushes towards the applications in the stack (so instead of a single solution that is shared among - say - 1000 clients you get 1000 solutions), while other stuff is simply impossible (unless you are XWayland, which gets special status, leading to the ironic situation that even under Wayland the X APIs provide you with more functionality :-P).
You're absolutely right that the problem isn't unique to Linux, it was just the easiest example I had on hand and was relevant to the discussion. I could just as easily fire up a variety of programs on my Mac that feature different file pickers and GUI conventions.
Windows, as you point out, is kind of thrashing around when it comes to what it wants its GUI to be. I don't think the situation will ever improve there as Microsoft is very reluctant to break backwards compatibility.
> Wayland is very restrictive for many uses and actually lacks several useful features compared to X11 - some it pushes towards the applications in the stack (so instead of a single solution that is shared among - say - 1000 clients you get 1000 solutions), while other stuff is simply impossible (unless you are XWayland, which gets special status, leading to the ironic situation that even under Wayland the X APIs provide you with more functionality :-P).
I will admit that I haven't dug into Wayland very much, but this is my impression as well. I try it (the Wayland KDE on OpenSUSE Tumbleweed) periodically, but it seems a lot crashier than X.
> Windows, as you point out, is kind of thrashing around when it comes to what it wants its GUI to be. I don't think the situation will ever improve there as Microsoft is very reluctant to break backwards compatibility.
I don't think they need to break backwards compatibility, they just need to focus on one of their existing tech and try to make the other stuff behave similarly. There is no reason for example why they cannot provide new window classes (as in RegisterClass) that implement controls that look and behave similar to the UWP stuff - after all there are already a bunch of 3rd party "Metro/UWP style" components for other toolkits. It is just that none is "official".
They need to provide some unification (even if underneath things are less than ideal) and this needs to cover all the tech they've made so far - Win32, MFC (which can build on Win32), WinForms and WPF (and others, if i forget anything).
> I will admit that I haven't dug into Wayland very much, but this is my impression as well. I try it (the Wayland KDE on OpenSUSE Tumbleweed) periodically, but it seems a lot crashier than X.
The implementations aren't really a problem since they can be fixed; the issue is at the protocol level, and even some of the goals/mindset that led to it (e.g. the inability of clients to talk or share resources is considered a feature, but this is one of the things you need to implement reusable programs that work as components).
Do you not see the contradiction here? If there was only GNOME, then you'd be stuck with GNOME's failings (whether obvious bugs like segfaults, or more subjective failures like their attempts to kill tray icons). Choice and diversity are the strength of *nix.
I think what they were getting at is that instead of the community developing one really great desktop environment they have divided themselves and ended up developing two desktop environments that are merely OK.
This is because someone's great feature is another's broken mess.
For example, i cannot stand vsync anywhere with the only exception being video playback. I want my windows to follow the mouse precisely, not lag behind a few pixels, i want my games to react instantly, not lag a few milliseconds, etc. Yet GNOME, elementaryOS and even Wayland's whole design force it (well, Wayland could be implemented without vsync, but is anyone doing it?).
At the same time you have people complaining about tearing and when they force composition everywhere to fix it, they do not mind (or sometimes, even notice) the lag.
Similarly, i like how X allows composing applications and environments out of individual components (ironically this sort of application composition lends itself to the Unix idea of one app per role, but most modern toolkits ignore that feature so we ended up with almost nothing really supporting it unless you go raw Xlib or ancient toolkits like Xaw or Motif).
Others see it as anathema and the root of all evil (ok, i cannot put some more concrete negatives for this as i cannot comprehend how someone would dislike it, yet i always end up arguing with people - especially GNOME/Gtk+ people /for some reason/ - over at Reddit about it :-P).
There is no way to please everyone, so you have to allow for choices. Or deal with people constantly complaining about their lost choices, that works too i suppose :-P.
I wasted the last two hours pinpointing what was causing my Firefox instance to be stuck with the mouse cursor hand icon. Turns out drag-and-dropping a URL shortcut onto a Nautilus window freezes FF (well, it freezes the mouse cursor; edit: so Firefox has to be killed because it doesn't respond to click or keyboard inputs anymore).
That's not a broken mess being someone's great feature.
It is (or should be) a showstopper; in 2017, in Ubuntu. I found mention of the bug as early as March 2017 in Fedora 2x (I think). I believe it's now lost in triage: after not being taken care of, it was closed because that version of the distro is EOL (until someone resubmits). How such a bug gets shipped is beyond me. But hey, I could hack a patch.
The Right Way (TM) to do it is to always have vsync but use a screen with a high enough refresh rate that you can't notice the latency. 60 Hz is probably good enough for 95% of people, and Apple is moving to 120 Hz in devices like the iPad Pro.
I have my reservations about 120Hz being the right way, since if you can see tearing at 120Hz without vsync, it means you can still perform actions that the refresh rate cannot keep up with, and the vsync will introduce a delay. But without experiencing it myself i cannot be 100% sure. I am nitpicky about lag and reaction though, so i have a feeling i won't like it unless it really is perfect :-P.
However, vsync or not, i also dislike composition because it introduces yet another source of lag, and at best you are at least a frame behind - unless your window updates are synced with the composition (which they won't be, because if that were possible then programs would be able to affect the compositor's own performance - imagine a game running at 30fps in windowed mode; it would cause all window updates, etc. to run at 30fps too if the updates were synchronized).
So, yeah, i'd rather stick with my compositionless, vsyncless, direct to front frame buffer X11 :-P.
EDIT: there is actually a way to have composition that works without lag and that is for the GPU to do the composition itself during the monitor refresh.
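The latency being argued about here is easy to put rough numbers on. A back-of-the-envelope worst-case model (the one-frame-per-stage accounting is a simplifying assumption): vsync can hold a just-finished frame for up to one refresh interval, and an unsynchronized compositor adds roughly one more.

```python
# Back-of-the-envelope worst-case delay added by the display pipeline,
# in milliseconds (a deliberate simplification, not a measurement).
def frame_interval_ms(hz):
    return 1000.0 / hz

def worst_case_latency_ms(hz, vsync=True, composited=False):
    frames = 0.0
    if vsync:
        frames += 1.0  # a just-missed frame waits one full refresh interval
    if composited:
        frames += 1.0  # unsynchronized compositor shows the previous frame
    return frames * frame_interval_ms(hz)

for hz in (60, 120):
    print(hz, "Hz:",
          "direct", round(worst_case_latency_ms(hz, vsync=False), 1), "ms,",
          "vsync", round(worst_case_latency_ms(hz), 1), "ms,",
          "vsync+composition",
          round(worst_case_latency_ms(hz, composited=True), 1), "ms")
# 60 Hz:  direct 0.0 ms, vsync 16.7 ms, vsync+composition 33.3 ms
# 120 Hz: direct 0.0 ms, vsync  8.3 ms, vsync+composition 16.7 ms
```

Which supports both sides of this sub-thread: composition really does cost an extra frame, and a 120 Hz panel really does shrink the whole worst case back to what bare vsync costs at 60 Hz.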
> One feature that differentiates Linux from other systems is the sheer amount of choice that you get.
It's a difference but not necessarily a feature if, as a user, you don't have the background to make informed and safe decisions about those choices.
Why would everything have to cater to the lowest common denominator on the scale from computer-novice to power-user? Those can just use Windows or Mac OS, or maybe Ubuntu. But why would their use cases invalidate mine?
Also, IDK how things are on Apple computers but on Windows no two apps look similar, so the common user should not have more difficulty with diverse UI paradigms on current Unices than on Windows.
In that case you can simply use a distribution that makes the choice for you. Generally people recommend Ubuntu or Linux Mint here.
The Linux distributors might be able to solve this. Obviously there are tons of window managers and a dozen consistent, working desktop environments. Ubuntu already made clear that they are moving away from X, and moreover that a better UX on the WM side will be a priority.
I've been using Linux again since this year, after working almost exclusively on OS X for 5 years or so. It has been more setup work than I wished, but I'm really satisfied now. OS X just lacks the transparency and customizability that Linux easily provides.
Not sure how Fuchsia will evolve. There are more operating systems than window managers, but only two operating systems with good driver support. Their names are Linux and Windows - OS X runs only on a handful of configurations. Android's hardware support is a mess.
Either they will support running Linux or Android drivers, or otherwise their system will be just useful for Marketing demos.
> something like a native Electron minus the memory and CPU overhead
I guess this is exactly what Chrome OS does. Or just use Firefox/Chrome on Linux. At least Firefox still has a working Marketplace...
>Either they will support running Linux or Android drivers, or otherwise their system will be just useful for Marketing demos.
Are you just saying that, or do you actually believe it? You think all of the phone vendors that write custom closed source drivers for Android will abandon Android if the core is no longer Linux? Seriously? And what, move to tizen? Windows phone?
Be reasonable... Google can and will move to a different core, and all of their partners will move with them.
Ok, yeah, maybe I don't believe it. But then we have to throw away all other (incl. older/non-Google-partner) hardware to use this system. Could work, but I'm not sure the user really benefits from that in terms of freedom or garbage.
Not only will their partners move with them, but they'll probably offer any assistance to accelerate the process. Android OEM's and Linux have a complicated relationship.
Here’s a list of syscalls: https://github.com/fuchsia-mirror/magenta/blob/master/docs/s...
That's a bit outdated (magenta was renamed zircon a while back and that mirror is overdue for deletion). Try:
https://fuchsia.googlesource.com/zircon/+/master/docs/syscal...
https://fuchsia.googlesource.com/zircon/+/master/docs/concep...
Interesting. More influence from Windows (NT) than I expected.
Things are changing. Gnome shell with Wayland is quite ok from a user perspective. It took me a while but I personally learned to like the bunch of gnome apps.
>The solution I want is a Linux OS, a WM with a good, cohesive, long term vision, and an easy way to build apps within that vision — something like a native Electron minus the memory and CPU overhead. I believe Google could do this.
This is literally MacOS/Cocoa/AppKit minus Linux plus BSD
More or less. Not sure about appkit, but the Apple dictatorial control bothers me. E.g., Apple apps on iOS have tight integration unavailable to other apps, and they don’t improve their apps for power users or broader use cases.
Also, the first thing I do on non-GNU/Linux systems is install GNU tools :)
>More or less. Not sure about appkit, but the Apple dictatorial control bothers me. E.g., Apple apps on iOS have tight integration unavailable to other apps, and they don’t improve their apps for power users or broader use cases.
This is what drove me away from the whole ecosystem, iOS included. The fact that all of their APIs are closed source headers only is a complete nightmare. I'd rather spend my time working with open Web standards than digging through some arcane Apple manual page.
Or GNUStep, if it had ever taken off. API-compatible with lots of Cocoa code. Unfortunately, it's not a complete reimplementation, and nobody uses it.
Personally I think it has more to do with licensing; Google has long opposed the GPL and user freedom.
They support developer-centric open source; they do not support user-centric Free Software.
It uses a microkernel, so probably around 100x-1,000x fewer lines of code, which should improve security.
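The intuition behind that security claim is simple defect arithmetic. All numbers below are rough assumptions for illustration (not measurements of Linux or Zircon): if residual defect density is roughly constant, shrinking the code that runs with kernel privilege shrinks the expected count of privileged bugs proportionally.

```python
# Illustrative defect arithmetic; every number here is an assumption,
# not a measurement of any real kernel.
DEFECTS_PER_KLOC = 0.5  # assumed residual defect rate for mature code

def expected_privileged_bugs(kloc):
    return kloc * DEFECTS_PER_KLOC

monolithic_kloc = 20_000  # order of magnitude: full kernel tree w/ drivers
microkernel_kloc = 100    # order of magnitude: a microkernel core

print(expected_privileged_bugs(monolithic_kloc))   # 10000.0
print(expected_privileged_bugs(microkernel_kloc))  # 50.0
# The other bugs don't vanish; they move to user space, where a crash or
# exploit is contained by process isolation instead of owning the machine.
```

The last comment is the key caveat, and it's the same point made elsewhere in this thread: the microkernel doesn't remove bugs, it demotes most of them from kernel-level to process-level.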
Smartphones are one thing, but I think the recent trend of using the Linux kernel in self-driving cars is a terrible idea that we'll only start regretting 10-15 years from now.
Using a microkernel just pushes the vulnerabilities to the userspace.
Yes. That is part of the idea. So the vulnerability is isolated and does not automatically compromise the entire system.
Who cares? In consumer devices, userspace is the entire system.
Well, not in, e.g., Android.
For example, I have an app that has a vulnerability (let's say my alarm app accidentally runs unauthorized code). What can it do? Nothing. It can't read from my banking app, it can't get my SSH keys, it may not even be able to read from my SD card.
But what happens when my Linux kernel is also compromised? Any app can get root.
It's not just one userspace. Fuchsia is capability-oriented: apps are sandboxed by default and only get access to the services they have been granted access to.
> Aside from control, what does this provide over Linux?
More software (eventually) for end users? And maybe since it's a RTOS, perhaps this is also to be used by Waymo?
> I believe Google could do this.
I keep seeing people say this, and yet: is there any evidence at all that Google can pull off designing something like this?
For what it's worth, they do have some people on board who have done OS dev before [0]. Swetland has commented in this thread and in other Fuchsia discussions on HN.
[0] https://www.theregister.co.uk/2016/08/15/googles_new_os_coul...
I'd be more interested in why you think Google can't. They have some of the best OS developers working on it in addition to a small army of developers working on it day and night.
1. Google's only consumer OS is Android. Even after literally years of development it's still riddled with inconsistencies (visual and behaviour), performance issues, security issues etc.
2. It took them literally years to arrive at Material Design, which is a vast, sprawling document that is often internally inconsistent, and even Google can't adhere to it a lot of the time.
3. Their best customer-facing products have traditionally been third-party acquisitions (such as Docs). They have a very inconsistent approach to UI/UX across all of their products.
(I had more, but it's hard to concentrate on the first of January :) )
They can still pull it off, but it's definitely not a given.
>Google's only consumer OS is Android
And why exactly is ChromeOS not a consumer OS?
>Even after literally years of development it's still riddled with inconsistencies (visual and behaviour), performance issues, security issues etc.
You make it sound as if inconsistencies, performance and security issues are isolated to Android, which is ridiculous considering the plethora of inconsistencies, performance issues and security exploits on other platforms. As for security issues, no Pixel has ever been hacked at a Pwn2Own event, while iOS devices consistently are.
>It took them literally years to arrive at Material UI design, which is a vast sprawling document which is often internally inconsistent, and even Google can't adhere to it a lot of the time.
Have you looked at iOS recently? Apple doesn't even follow their own guidelines nor do the vast majority of the special snowflakes on the App store. And I won't even go into the disaster that is Metro/Modern or whatever they're calling it now on the Windows platform.
>Their best customer-facing products have traditionally been third-party acquisitions (such as Docs). They have a very inconsistent approach to UI/UX across all of their products.
Mac OS was a third party acquisition and iOS was created from that, so you could make the argument that even iOS is the result of a third party acquisition. The fact is these products would never be as successful as they are today without the resources and money it's taken to get them to this point.
>They can still pull it off, but it's definitely not a given.
It's not really a question of if they can pull it off, but rather when they'll pull it off. If nothing else Fuchsia will be the new OS used by Google internally replacing their current customized Linux distribution. And if that's the extent of Fuchsia's use then it'll still be considered a win, but that's not where it's going to end in my opinion. I see Fuchsia surpassing Linux in desktop OS market share rather quickly and eventually challenging Mac OS in the long term.
Isn't that basically Android?
> IMO, the Linux desktop problem is due partially to relying on the antiquated X Windows system, but mostly a lack of funding a good vision, e.g. Unity.
What is funny is that Linux on the desktop was closest to actually happening when KDE was happy emulating Windows rather than being its own thing, and doing so quite well on top of X.
As for X being antiquated, F that.
The actual commit includes some details on the disk paver.
https://github.com/fuchsia-mirror/docs/commit/520ed01fd6f258...
Linux is the last Unix and that is ok. Unix is a philosophy, not an implementation. Linux mocked Mach, but was usurped by the hypervisor, which violently corralled Linux into a microkernel environment anyway. Lots of things are in the kernel that don't need to be, making them non-updatable, as someone else controls the keys. That past, present and future is awesome. But let's build from the past and build the future. Not saying Fuchsia is it, but as unikernels and exokernels have shown us, Linux itself is just an app in the stack. Unix is a framework for running processes. Your domain problems are the real problems; the OS is an implementation detail.
I wish Google was more outspoken about their plans for Fuchsia. So far, all we have is a lot of speculation.
I think they don't know themselves what will become of it. It's more a research project similarly to Midori.
I think there's a lot more going on than they are publicly letting on.
If the general public starts to believe that there is an Android successor in the works, many people will stop buying Android devices until further notice. This could be absolutely catastrophic for the Android device market.
If I were Google, I would bury the name Fuchsia, call the thing Android 10, and let it be known that it's years out.
>If the general public starts to believe that there is an Android successor in the works, many people will stop buying Android devices until further notice.
The bigger danger is the response by android developers. Will you invest in a soon-to-be-deprecated Android Native app written in a soon-to-be-deprecated API (because, at least according to the Internet rumors, Fuchsia will be using Dart instead of Java and will have a totally different API), rather than just write it in Xamarin, Cordova or React Native?
And once your apps aren't written in Android API, how hard is it to port to, say, Windows Phone?
This damages their moat.
>Fuchsia will be using Dart instead of Java
The current sysui is Flutter/Dart based. Flutter is Google's newish mobile app SDK; the people who started it worked on the Chrome team. It's in alpha and supports Android and iOS.
https://github.com/fuchsia-mirror/sysui "Armadillo is currently the default system UI for Fuchsia. Armadillo is written in Flutter"
Here is an example of a Rust program, the Xi editor, using Flutter for the UI: https://github.com/fuchsia-mirror/xi
This is silly. No way they would launch such a thing without ART support. Even ChromeOS is running Android apps now.
From what I understand, with Dart/Flutter you can write apps for Fuchsia, iOS and Android with the same codebase. I think this could be interesting for devs. If making Fuchsia apps is easier and more pleasant than making Android apps, developers will respond positively.
Yes, this is more like it. Most likely it was someone's 20% project (if that still exists) and someone higher up the chain of command saw some potential in it and gave them more resources. Google can afford it. At worst, it can fail; at best, it can give Google a technologically superior OS. Plus, they can always use parts of it, or lessons learned from it, elsewhere.
There is also no reason they can't replace Linux underneath Chrome OS with new foundations. After all, Linux internals were never exposed to consumers in Chrome OS. As long as they port Chrome, they should be able to do it, at least theoretically. The same theory could apply to Android, but that would be much harder to do, I think.
Yes, and there are tons of research OSs. Most of them suffer from a lack of Hardware support. I'd be ready to switch to OpenBSD/Solaris/QNX/other fancy OS tomorrow if all my HW components were supported.
I doubt anything non-Linux based is ever becoming popular within the next 100 years. ;)
They don't need to replace Linux in one big event. They'll probably test some ideas and safely move the ones that work into their production OSes (similarly to how Span was introduced into .NET from Midori).
There is also some effort in abstracting device drivers in Android (Project Treble) but how it could be used in Magenta is not clear.
Personally I'd be really interested in seeing this OS deployed in production. I like Linux, but it's the 21st century; we should be slowly adopting basic security principles in our OSes (capabilities, microkernels). But I fear that Magenta will look similar to Android: while it is open source, Google will internally use a customized version that customers will not be able to compile themselves.
Sure, it could be a process and Project Treble goes in the right direction.
However, it's insane when you consider how many device drivers there are for Windows and Linux. The large majority of Linux kernel code is drivers. When you consider that Linux still has problems running on certain hardware, the problem becomes more obvious.
My bet is, in case this ever becomes a success, then only for a subset of vendors that are willing to cooperate closely with Google. (Oh yeah, and everybody needs to throw away the old hardware.)
I wish efforts would instead go into improving the Linux kernel. It reminds me of Google saying that JavaScript is a complete dead end and must be replaced. So they created Dart, which surely has a great design, but nobody uses it, and JS is now better than ever.
Moreover, it's a bit of a lame excuse that vendors don't update their Android modifications. It already starts with Google not managing to get their own Linux kernel modifications into the mainline kernel, almost violating open source principles...
If you're manufacturing the hardware as well as the software, this is less of an issue.
Getting a new OS to run on a new cellphone, or laptop is a surmountable task.
It’s entirely open source (including development and roadmap documents).
Do you mean their product vision? If so, why does it matter when it already exists?
It matters hugely. If Fuchsia is a research project, then I'd keep an eye on it for engineering inspiration. But if Google's leaders plan to promote it as a serious replacement for Chrome and Android, then businesses might want to invest in Fuchsia app development.
I think the strategy is to make Flutter an attractive tool to build mobile apps across Android, iOS and Fuchsia (it already runs on all three today). Even in an alpha stage, it is a really productive framework for mobile development.
So if Flutter hits 1.0 or beta sometime next year, you would have a way to develop mobile apps for today's platform which will require a smaller investment to run on an eventual Fuchsia platform.
More specifically Fuchsia driver development. That's where it substantially differs, e.g. video drivers run in user space rather than being kernel modules.
For Chrome and Android apps, that's just a bunch of user space APIs/ABIs so it doesn't really matter what the kernel is if the abstraction has been really rigorous from the outset (in Chrome and Android).
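As a toy illustration of why user-space drivers matter (a Python sketch of the general idea, not Fuchsia's actual driver model; the request names are made up), a driver running as its own process can crash without taking the core system down with it:

```python
import multiprocessing as mp

def buggy_driver(conn):
    """A 'driver' serving requests over a pipe; a bug kills only this process."""
    while True:
        req = conn.recv()
        if req == "crash":
            raise RuntimeError("driver bug")  # unhandled: process dies
        conn.send(f"handled {req}")

def main():
    parent, child = mp.Pipe()
    drv = mp.Process(target=buggy_driver, args=(child,))
    drv.start()

    parent.send("read_block")
    print(parent.recv())              # driver works: "handled read_block"

    parent.send("crash")
    drv.join()                        # driver process dies with exit code 1...
    print("core still running, driver exit code:", drv.exitcode)
    # ...but this process survives and could restart the driver.

if __name__ == "__main__":
    main()
```

In a monolithic kernel the same bug in a kernel-mode driver would have been a system-wide panic; here the fault is contained to one restartable process.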
There probably are no intricate plans for the future on an executive level, besides having some smart people enjoy their work while building something that may or may not be useful in the future, depending on how a) their effort and b) the future turn out.
From Wikipedia: Chrome OS is an operating system designed by Google that is based on the Linux kernel and uses the Google Chrome web browser as its principal user interface.
OK so why not Google Chrome web browser on top of Fuchsia, instead of Linux kernel and the usual user space stuff? Google can call that retrofit anything they want, so it could still be Chrome OS.
I have been following Chrome development, and one of the things being done there is "servicification" of the blink engine - i.e. breaking it up into services accessed using Mojo. Fuchsia is also built with Mojo as the IPC mechanism. So reading the tea leaves, it seems to me that a "servicified" Chrome will be fully integrated into Fuchsia.
Besides this, of course, Fuchsia shares many other components with Chrome (the use of Skia, for example). If we look at the direction of Flutter, Fuchsia and Chrome, it is clear that this is not some casual side-show but a very well thought out strategy.
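The "servicification" idea can be reduced to its essence in a short sketch (this is not the Mojo API; the service name and messages are invented): a component is reached through a message pipe instead of a direct library call, so the same code can later move to another process without changing its callers.

```python
import queue
import threading

class Channel:
    """A pair of queues standing in for a Mojo-style message pipe."""
    def __init__(self):
        self.to_service = queue.Queue()
        self.to_client = queue.Queue()

def network_service(chan):
    # Handles a single request; a real service would loop forever,
    # possibly in a different process entirely.
    method, url = chan.to_service.get()
    if method == "Fetch":
        chan.to_client.put(("Response", f"<html>stub for {url}</html>"))

class NetworkClient:
    """Client-side proxy: callers see a method call, not the IPC under it."""
    def __init__(self, chan):
        self.chan = chan
    def fetch(self, url):
        self.chan.to_service.put(("Fetch", url))
        return self.chan.to_client.get()

chan = Channel()
threading.Thread(target=network_service, args=(chan,), daemon=True).start()
tag, body = NetworkClient(chan).fetch("https://example.com")
print(tag, body)
```

Because the client only ever talks to the pipe, the service behind it can be hosted in-process, in a sandboxed process, or (speculatively) inside a Fuchsia component, without the caller noticing.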
Not an expert on ChromeOS but it has more than simply Chrome right now, for example it can run Android apps.
It is true, though, that Google could replace the low layers of Chrome OS as long as it replicates most of the user-facing features.
If Google intends to make Fuchsia some kind of Android successor (we have no way to know if that's the plan, though), this feature is sorely needed anyway: an Android VM for 'legacy' apps and Flutter for the future.
Fuchsia can display web content; it has a web view that is currently WebKit-based, not Chrome/Blink. I think the plan is to use the Chrome/Chromium web browser, once/if it arrives on Fuchsia; that will be interesting.
There is actually a fuchsia port in the upstream chromium tree.
Under this OS, who decides what capabilities an app has?
I would love this so much. A Linux OS with the support of a major company and decent UI/UX? Yes please.
Except it's not Linux?