Docker on Windows Server 2016 Technical Preview 5
The only drawback of the current onslaught of Microsoft activity is the fact that most "enterprise" businesses and software products are completely unprepared or unwilling to move in step.
As an integrator, I only started seeing Win2012 in production last year, so it will be another 5 years before I can start playing with these new toys. Hell, some people are still chugging along with 2003sp2... and I'm talking Big Business, not some godforsaken countryside school.
And one might argue they are taking the sane approach to the upgrade treadmill. It's easy to forget technology is supposed to solve real problems and that there's no economic incentive to use new toys to solve already solved problems, possibly introducing new issues with less well-known solutions.
The fact that businesses are extending the life of systems not because they are unwilling to invest in upgrades but because those systems work just fine is a sign that the industry has reached a good level of maturity. It's a good thing. We should all be working collectively to solve new problems instead of iterating over the same problems again and again.
In the old days of NT4, a 4-year-old system would accumulate maintenance costs. Today, Windows 2003 is 13 years old and still pretty serviceable.
I'd be more worried about businesses accumulating unsustainable technical debt than accumulating old (but stable) technologies.
Windows 2003 is past its end-of-support date, and you aren't receiving security patches unless you're paying Microsoft a huge amount of money for extended support.
One might argue that it's insane to run a 13 year old OS that is not getting security updates any more at your business.
Windows 2003 is EOL because Microsoft wants to push customers to the latest version. Customers have been pushing back at this for a while now, but Microsoft (and other vendors) make more money selling the new shiny than extending their products' lifecycles.
In an ideal world, operating systems (server and desktop alike) would already be on a 5-year release cycle with just yearly incremental upgrades in between (as much as the vendor can manage in a service-pack model).
Is it insane to run systems without any security updates? Even within the lifecycle of a product, many businesses never even patch after the initial install. I personally know people who live by this: never patch anything unless presented with proof that it's necessary to do so (I don't completely agree with this, but money has been lost catering to low-impact security updates, and people tend to learn a few lessons from it).
Security is more about risk management than being free of vulnerabilities. The issue isn't going without security updates; it's doing so without assessing the risk.
Correct me if I'm wrong, but hasn't Microsoft introduced a lot of security features into the versions since 2003?
>Today, Windows 2003 is 13 years old and still pretty serviceable.
Unless you're maintaining Server 2003 itself, at which point it comes with 13 years worth of problems.
Absolutely. I'm involved in systems design & IT strategy for a very large institution. By and large we've found literally zero advantages imparted to end users by Windows 10, besides being forced to deploy it by MS's decision to end-of-life Win7 early. There are some cool features in Server 2016, but nothing that will affect a user's workflow in the slightest.
I was a bit of a Windows 7 stalwart until recently. I upgraded my parents' Vista machine to Windows 7 ... very sluggish. Thought I really needed to get them an SSD at some point. Then I upgraded it to Windows 10 on a whim - incredibly snappy. No need for SSD yet.
Sure, snappier on older hardware is a plus. But we've got a 4 year replacement cycle, and a slight increase in snappiness isn't what I'd call a major upgrade.
You dodged a $40 bullet there.
I absolutely agree! I guess my complaint is that MS is developing all this new stuff with no regard for the installed user base -- it's all add-ons for Windows 10 and Server 2016, rather than standalone cross-version apps that could be deployed sooner on existing infrastructure. It's obviously easier for them, but it means "normal people" will not be able to enjoy most of these advances for a very long time.
> Hell, some people are still chugging along with 2003sp2
I'd think this is for the same reason XP lasted so long --- it's a good stable platform free of all the hassles and complexities of the ones that came after it. A lot of home users are already quite disturbed by the privacy implications of Windows 10, and if anything big businesses would be even more cautious of the same for its server counterpart.
I'd be interested to see a survey of home users with regard to privacy. I think a lot fewer care than we'd hope.
And all of those home users are on Facebook already, so it probably doesn't matter.
Just because you're on Facebook doesn't mean everyone is.
I'm one of an extremely large number of people who haven't made your mistakes. We tend not to bring it up because people like you go super-aggressive-defense and accuse us of being paranoid if we do.
Don't mistake our not bringing it up for us not existing.
Not on Facebook.
By the same logic: I am a UK citizen, so it "probably" doesn't matter.
I don't see how this is a drawback. Maybe not 5 years, but this stuff is still cutting edge, untested and unproven. Do you really want to use this kind of stuff in production when most of the issues haven't even been found yet, let alone fixed? No best practices, no other people's mistakes to learn from? Maybe tomorrow Microsoft decides it wasn't such a good idea and starts moving away from it (TBH they are better than most in this regard, but still).
Still doing .NET 4.0 and Java 7 for production code here, exactly because of these types of issues.
If this starts utilizing "Ubuntu on Windows"[0] then, I think, most of the images should already work with minor modifications, which is great. I can't wait to see how containerization (and the tools built for orchestrating it) will evolve in the coming years to help dev/devops.
[0]: https://insights.ubuntu.com/2016/03/30/ubuntu-on-windows-the...
Paul Thurrott quoted a Microsoft person saying that Ubuntu on Windows is a client-side thing. It may be possible to use it on Windows Server, but I believe that is not the intended use.
I would say it is easier to just spin up an Ubuntu instance than to use Ubuntu on Windows on the server. What are some use cases that you're thinking of? Am I missing something?
The use case is being able to run all the zillions of existing Ubuntu-based Docker images on Windows Server. If the Ubuntu-using Dockerfile is mostly made up of 'RUN apt-get install' commands along with a few commands like `RUN cd /tmp; wget https://ourhost.foo/some.config -O /etc/someapp/some.config` then it just might work with the Ubuntu-on-Windows thing.
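For example, a Dockerfile of roughly that shape might look like the sketch below ('someapp' is an illustrative package name, and the URL is the one from above); everything in it only exercises the Ubuntu userland and the filesystem:

    FROM ubuntu:14.04

    # Plain package installs -- these need nothing beyond apt-get
    # and the Ubuntu userland
    RUN apt-get update && apt-get install -y someapp

    # Fetch a config file -- again only a userland tool (wget)
    # writing into the image's filesystem
    RUN cd /tmp; wget https://ourhost.foo/some.config -O /etc/someapp/some.config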
There are two parts... there's the Windows Subsystem for Linux (WSL), which provides the Linux ABI... Ubuntu on Windows is all the Ubuntu userland running via WSL. It's probable that this Docker system works with WSL as-is.
My question is why run on Windows? Wouldn't it be better for the host to be GNU/Linux if you don't need any of the functionality that Windows provides?
I fail to see the benefit of using Windows on the server unless you need Windows. Even Microsoft seems to acknowledge that. I must be missing something.
They're probably doing it so that Docker containers running on Azure infrastructure can be a bit lighter than they currently are. That's probably the main reason. Second, if you are already running Windows servers, because you have software/infrastructure that requires it, this allows you to also run Docker images, and Linux software, without the overhead of full virtualization and/or new hardware.
Beyond this, there are a lot of developers running Windows either by choice, or because they have software requirements themselves, who can leverage this.
If they keep improving the Linux subsystem in the Windows kernel and it starts offering a complete POSIX layer (i.e. the syscalls we expect in a Linux system), then one day the Linux-based Docker images could run unmodified.
It's probable this preview is actually using the Windows Subsystem for Linux, which is fairly complete... and it makes sense. It also makes a lot of sense in terms of the direction that .NET Core has been taking, so that apps can be deployed in Linux Docker containers more easily.
Although I find a lot of the Azure services a bit too low-level, it's been very easy to build simpler interfaces on top of them and get a lot done quickly. If this is anywhere near as nice on Azure as the Joyent cloud's Docker support looks, it could be very interesting indeed.
If anything, I guess Docker has become Go's killer application; even Microsoft is contributing to it as a means to help Docker run better on Windows.
I hope it can help OCaml as well, given their acquisition of Unikernel Systems.
What about licensing? Does the Docker image manifest include machine-readable licensing metadata required for automated license compliance verification?
The main Windows Server 2016 page has pretty clear guidance on this[0]:

    OSEs / Hyper-V containers
      Datacenter: Unlimited
      Standard:   2

    Windows Server containers
      Datacenter: Unlimited
      Standard:   Unlimited
[0] - https://www.microsoft.com/en-us/server-cloud/products/window...
What about running Docker containers on desktop/laptop machines during development? It's a major part of the Docker use case.
Trying to keep track of Windows licensing compliance across multiple versions and deployment models is confusing enough as it is. Different sources will give different answers to the same questions when interpreting licensing scenarios, and you can never know for sure unless you get audited (?)
> Different sources will give different answers to the same questions when interpreting licensing scenarios
1.) Containers are not available on desktop SKUs. So there is no licensing consideration for Windows 10. If they later add containers to Windows 10, then they'll release licensing rules at that time.
2.) If you are running a server OS for your desktop, then the licensing is pretty clear. Hyper-V containers follow the same rules as normal Hyper-V VMs (1 physical + 2 virtual for Standard, then each additional VM requires a license; unlimited for Datacenter). For Windows Server Containers (which are not Hyper-V based), it's even easier: unlimited regardless of edition.
3.) As always, the host OS must be licensed fully in order to have the appropriate rights (2016 is moving to core licensing, with a minimum of 8 core licenses per processor and a minimum of 2 processors, so even a small two-socket, 16-core host needs at least 16 core licenses).
All in all, it's one of the easier features to understand the licensing for since it doesn't directly deal with CALs or internal/external usage rights.
Contrast that with the license considerations for running Docker on Ubuntu:
1.) Containers are available on both desktops and servers because they're fundamentally the same OS (but with different sets of packages installed by default).
2.) As always, the server and desktop versions of Ubuntu are available for unlimited use with zero licensing costs. Completely free.
I'd also like to add that the "base" Windows Server container image is 9.3 gigabytes while the base Ubuntu container image is 120 megabytes. Put it all together and you wind up with vastly greater costs to run Docker containers on Windows.
Having said that, if your application only runs on Windows then putting it inside a container might not be a bad idea.
1.) and 2.), ok. As pointed out, Windows Server containers don't have any additional licensing costs above and beyond the base host OS license. Presumably, if you already paid that cost, then you feel that it's reasonable to do so.
As for the size, I have a feeling that people who are going to run Windows containers as a herd are going to opt for the Nano image, which is 793.3 MB. Still about 6.5 times larger than the Ubuntu container image you mention, but 11.5 times smaller than the servercore container image. Particularly since the Nano image is focused towards individual roles (IIS, DNS, etc.), which works well with the one-task-per-container philosophy.
All in all, I'm not sure what the value is in comparing the two solutions (that isn't already covered in the Linux vs Windows threads). You can't run Linux containers on Windows, and you can't run Windows containers on Linux (without running an actual virtualized workload). So it seems pretty clear that you choose Windows if you have Windows servers to containerize, and Linux if you have Linux servers to containerize.
Currently Windows containers are only available on Server, not on desktop at all, hence there is no licensing information available for desktop.
Windows Hyper-V Containers are available on the Windows 10 client through the Insiders program. This is intended for development scenarios: you can run your Windows containers locally, building up your app/service, then deploy on servers running the Container feature. We're still early in the Windows container development efforts. http://aka.ms/DockerToolsForVS. The experience built for Linux containers will support Nano Server as well. In the current Windows Insiders program, you can run Nano Server based containers locally, which you can develop .NET Core apps against.
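To make that concrete, a local session might look something like this sketch (the microsoft/nanoserver image name is an assumption based on the preview-era feeds; check the current docs for the actual name):

    # Pull the Nano Server base image (image name assumed, see above)
    docker pull microsoft/nanoserver

    # Start an interactive PowerShell session inside a Nano Server container
    docker run -it microsoft/nanoserver powershell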
Traditionally, you've been allowed to use nearly anything for development and testing purposes under an MSDN subscription.
I would assume that also applies to VMs or containers.
That was my first thought, too. At first sight, Windows containers seem to run only on Windows hosts. Maybe the licensing is covered through the host's license (speculation: you can run n Windows containers if you buy license x for the host OS).
> you can run n Windows containers if you buy license x for the host OS
Licenses for Windows Server generally convey some rights for virtualization. https://hyperv.veeam.com/blog/virtualization-rights-windows-...
Any word on this trickling down to Windows 10 too? It'd be unfortunate if you had to run VM-based Docker on regular development machines, IMO. Probably better to just run Windows Server on those machines in that case.
See above for Windows container support (Nano Server) in the current Insiders program. However, also note that this is very early, and the experience is a little rough as we stitch together all the moving parts that are coming together for what we believe will be a great local development experience for containerized apps, targeting Linux and Windows workloads.
One of the claimed benefits of Docker is that developers build and test the same images that are run in production. It seems like multi-platform support negates that.
Multi-platform seems like a good direction to go in, so it might be a worthwhile tradeoff.
The goal of multi-platform support is to allow workloads to target different platforms, not for a single container to run on both. Consider this scenario:

- a web front end built with Node.js, targeting Linux
- an ASP.NET Web API built with .NET Core, targeting Nano Server (Windows)
- a Redis cache, used by the Web API, run as a Linux container

When you spin up these containers, you should be able to say docker-compose up, and Docker, Mesos, ... know how to route the containers to the appropriate hosts. Steve
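As a rough illustration, a docker-compose.yml for that scenario might look like the sketch below; the image names and versions are illustrative, and how the containers get routed to Linux vs. Windows hosts is up to the orchestrator:

    version: "2"

    services:
      web:
        # Node.js front end -- scheduled onto a Linux host
        image: node:4
        command: node server.js
        ports:
          - "80:3000"

      api:
        # ASP.NET Web API on .NET Core -- scheduled onto a Windows
        # (Nano Server) host; this image name is hypothetical
        image: microsoft/dotnet-nanoserver
        command: dotnet run

      cache:
        # Redis cache used by the Web API -- scheduled onto a Linux host
        image: redis:3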