Show HN: Hosted Docker-as-a-Service on SSDs for $5

blog.copper.io

387 points by edbyrne 12 years ago · 132 comments


shykes 12 years ago

Hi everyone, Docker maintainer here. Here's my list of docker hosting services. Please correct me if I forgot one! I expect this list to get much, much longer in the next couple months.

* http://baremetal.io

* http://digitalocean.com (not docker-specific but they have a great docker image)

* http://orchardup.com

* http://rackspace.com (not docker-specific but they have a great docker image)

* http://stackdock.com

EDIT: sorted alphabetically to keep everyone happy :)

  • tekknik 12 years ago

    So I just checked the docker website (http://www.docker.io/learn_more/) and there is still a flag up stating it's not yet ready for production use. Does this mean you guys/gals are gaining enough confidence in it now?

  • sillysaurus2 12 years ago

    The rise of Docker is fascinating. How did you get people to care about it initially? Did everyone immediately see it as a good idea? Congrats.

    • shykes 12 years ago

      We spent 3 months going door to door, making demos to people I knew were working on similar projects or looking for one. We had a good reputation in the ops and systems engineering community because of our work on dotCloud over the last 6 years. Then we bootstrapped the open-source community with that initial group of 30-50 people willing to federate efforts. By the time the project was leaked on hacker news, the github repo and mailing list were already very active.

      Early members of that "seed" community included engineers from Twilio, Heroku, Soundcloud, Koding, Google, Meteor, RethinkDB, Mailgun, as well as the current members of the Flynn project.

      • recuter 12 years ago

        So, hi there. I'd like to invest in you guys, where do I send the cheque? :)

        • recuter 12 years ago

          No response yet, hmm. Seriously, guys, I have a $100 bill here with your name on it.

          • shykes 12 years ago

            I am flattered, but we are already well funded and are not currently looking for new investors. However if you're feeling generous I can point you to a few people who have been making awesome contributions to Docker on their free time. I'm sure they would appreciate donations, or perhaps contract work :)

            There are also several startups currently raising money for a business based on docker. This is bigger than any one company!

          • lost-theory 12 years ago

            I'm sure you can find some of the developers here: http://www.gittip.com/.

    • MadeInSyria 12 years ago

      When you're an ops person who has spent time finding the perfect way to make "redeploying easier than fixing", Docker becomes the answer.

      I got to meet the Docker team (a lot of French dudes on the team!). Very passionate, technically super sharp, and really fun! They were interested in my point of view and opinions. Plus, their lead dev knows how to party, from what I saw at a meetup!

    • kcen 12 years ago

      I learned about it from creack (lead contributor according to github) @ a startup meetup in SF.

      Things that impressed me:

      1. super passionate

      2. he was very receptive and quick at squashing bugs I reported (real or not)

      3. docker was super portable (the same across all linux distros)

      4. they (the docker team) had real solutions for the long application deployment times that were plaguing me

      Everyone seemed to know it would succeed, which is rare around here.

    • atomaka 12 years ago

      The real question is why Sun didn't succeed in leveraging this technology with their implementation of zones.

  • jrmcauliffe 12 years ago
  • dsissitka 12 years ago

    AppFog's CTL-C: http://ctl-c.io/

  • bacongobbler 12 years ago

    I found this one a couple weeks ago! https://stackmachine.com/

_lex 12 years ago

Sounds like you took the best parts of digital ocean and are trying to push it as a platform with docker baked in. I like. It seems like you're also trying to simplify using docker. I like even more.

Angostura 12 years ago

I love the fact that you keep trying to define your own vocabulary ('Deck', etc.) but always have to explain it. Best to stick with the more easily understood terms rather than invent your own, I think.

Unless you're going to try and trademark them all.

  • kmfrk 12 years ago

    Yeah, this is my only problem with the service: I didn't get the metaphors at all, and their descriptions only added to the confusion.

    The simplicity of the service is an opportunity to attract people without a lot of webdev chops, so why not make it super simple?

  • pplante 12 years ago

    I agree with this. I recently wrote a bunch of new Dockerfiles. The Dockerfile name is sort of meh, but "Deck" really doesn't tell me what it's for.

  • peteforde 12 years ago

    Heroku created almost as many new terms as Tolkien, and they seem to have made out just fine.

    • gizzlon 12 years ago

      I looked at Heroku once, but all the custom lingo really put me off it. But, of course, your point about them being successful stands.

      • mscarborough 12 years ago

        It's worth a shot if they fit your use case. Once you get past the ninja-speak of choosing size and database, it's really simple.

        For rake/rails apps at least, you just run `heroku run 'command'` and you're done.

    • clebio 12 years ago

      While I agree with your comment, I hope it isn't used as a measure or justification for doing so. I've had the same cognitive problem with Heroku as the parent describes.

    • shiftpgdn 12 years ago

      God, this drives me absolutely insane. Elvish marketing speak is such a stupid waste of time. Why can't we stick to commonly accepted terms instead of trying to bake up new "cloud"-esque replacements?

  • edbyrneOP 12 years ago

    Thanks - we discussed that a lot - we were trying to make a simple 3-step process. If we get a lot of feedback that it's confusing, we'll ditch it.

    • htilford 12 years ago

      I think that even if [deck, drop, instance] is clearer than [Dockerfile, image, container], it would still be better to use [Dockerfile, image, container]. It's the standard set forth by Docker, and sticking to the standard makes interoperating easier for everyone.

      • rattray 12 years ago

        I agree with this, though I'm biased because I personally find [dockerfile image container] clearer than [deck drop instance]. Explicit > flashy.

    • _lex 12 years ago

      I think you need to define your own language if it's better. 'Deck' is clearly better than 'Dockerfile'.

      Feel free to innovate - you're a startup, and it's what we love about you.

    • gtaylor 12 years ago

      I'd have to agree that the standard Docker terminology would be much preferred. Your business covers what is a pretty cutting edge, advanced concept right now. Your customers are likely to be at least somewhat understanding of the standard terminology. Your custom terminology tripped me up as well, despite having a reasonable grasp of the higher level Docker terminology.

      Other than that, this looks great! I'm excited for you guys.

    • nailer 12 years ago

      I'm in a similar problem space to you. After a year of defining my own 'simpler' terminology, I decided to abandon it in favour of being consistent with the more popular, albeit complex, terminology.

      I hope that saves you some time.

terhechte 12 years ago

I like the idea. Really cool. I've been researching Docker a lot lately, and did most of my recent development on CoreOS. I do have a question that wasn't immediately obvious: Docker maintains that one should make a container out of every application, so that instead of installing apache + mysql + memcached in one Ubuntu environment, you'd create three Docker containers (apache, mysql, memcached), run them together, and define the share settings, etc. Now here's my question: it seems as if on Stackdock, every container would be a separate (at least) $5 instance? So if I want to run apache + mysql + memcached, I'd need to cram them all into one Docker container in order to have them on one machine? Or is it possible to use a $5 Stackdock instance and run multiple containers on it, like on CoreOS?

Thanks!

  • shykes 12 years ago

    There is a new feature of Docker called Links which allows you to organize your stack in multiple containers and "link" them together so they can discover and connect to each other.

    There's a great explanation here: http://blog.docker.io/2013/10/docker-0-6-5-links-container-n...
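    A sketch of what that looks like with the 0.6.5-era CLI (the image and container names here are placeholders, not from the post above):

    ```shell
    # start a database container and give it a name
    docker run -d -name db myorg/redis

    # start the app container with a link to "db"; docker injects
    # environment variables such as DB_PORT that the app can use to connect
    docker run -d -link db:db myorg/webapp
    ```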

    • StavrosK 12 years ago

      I tried to deploy a Django application with Docker a few weeks ago (using a single image with supervisord), only to discover that, during "docker build", I needed the database already running (so Django could create its database), which was pretty much impossible using a single Docker image and a Dockerfile.

      With the new Links functionality, this is much easier, but are you planning to ever have the ability to use a single Dockerfile to deploy an application which may contain multiple images (with links between them)? I want to be able to do "docker build ." and have my application up and running when it finishes.

      • shykes 12 years ago

        > are you planning to ever have the ability to use a single Dockerfile to deploy an application which may contain multiple images (with links between them)?

        Yes, definitely :)

  • tokenizerrr 12 years ago

    It is common to include dependencies like MySQL and Apache in your application's container. Usually people use supervisord with a configuration file to start all the different daemons needed.
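    A minimal supervisord.conf along those lines might look like this (the program commands are illustrative and depend on your base image):

    ```ini
    ; run supervisord in the foreground as the container's main process
    [supervisord]
    nodaemon=true

    [program:mysqld]
    command=/usr/bin/mysqld_safe

    [program:apache2]
    command=/usr/sbin/apache2ctl -D FOREGROUND
    ```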

kmfrk 12 years ago

"Docker-as-a-Service", simple, easy-to-understand pricing. Love it.

This is my favourite Docker offer so far. I've been looking for something to replace dotCloud's deprecated sandbox tier for just playing around, and it looks like this fits the bill.

panarky 12 years ago

This is truly awesome, nice work!

I configured and launched a machine with redis and node in less than 5 minutes. Very cool.

How will you isolate instances from each other? My instance appears to have 24 GB of RAM and 12 cores, and it looks like I can use all of it in my instance.

  • aroch 12 years ago

    Docker uses LXC which supports memory and CPU limits

  • CSDude 12 years ago

    You can limit a Docker container to a CPU weight share, and also set a memory limit. File storage limits are due in Docker 0.7; for now you can use ulimit for them.
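    A sketch of the 0.6-era flags for those limits (in this era -m takes a limit in bytes; the image name is a placeholder):

    ```shell
    # half the default CPU weight (1024), and roughly a 256 MB memory cap
    docker run -c 512 -m 268435456 -d myimage
    ```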

  • sprice 12 years ago

    I don't see node as an option. What am I missing?

antihero 12 years ago

One thing that confuses me with Docker is how you configure your containers to communicate with each other.

So say I have a fancy Django image, and a fancy Postgres image.

How do I then have the Django one learn the Postgres one's IP, authenticate (somehow), and then communicate separately?

Also, the recommended advice for "production" is to mount host directories for the PostgreSQL data directory. Doesn't this rather defeat the point of a container (in that it's self-contained)? And how does that even work with a DaaS like this? I'm pretty confused. Is there an idiomatic way to do this?

Do service registration/discovery things for Docker already exist?

  • shykes 12 years ago

    > One thing that confuses me with Docker is that how do you configure your containers to communicate with each other.

    Docker now supports linking containers together:

    http://blog.docker.io/2013/10/docker-0-6-5-links-container-n...

    > Also, the recommended advice for "production" is to mount host directories for the PostgreSQL data directory. Doesn't this rather defeat the point of a container (in that it's self contained)

    The recommended advice for production is to create a persistent volume with 'docker run -v', and to re-use volumes across containers with 'docker run -volumes-from'.

    Mounting directories from the host is supported, but it is a workaround for people who already have production data outside of Docker and want to use it as-is. It is not recommended if you can avoid it.

    Either way, you're right: it is an exception to the self-contained property of containers. But it is limited to certain directories, and Docker guarantees that outside of those directories the changes are isolated. This is similar to the "deny by default" pattern in security - it's more reliable to maintain a whitelist than a blacklist.
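    A sketch of the volume approach (the container and image names are made up):

    ```shell
    # create a data-only container that owns the volume
    docker run -v /var/lib/postgresql/data -name pgdata base true

    # run the actual database re-using that volume; the data survives
    # even if the postgres container is removed and recreated
    docker run -volumes-from pgdata -d myorg/postgres
    ```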

bfirsh 12 years ago

We're doing a similar thing called Orchard:

https://orchardup.com

We give you a standard Docker instance in the cloud - all the tools work exactly the same as they do locally. You can even instantly open a remote bash shell, like the now-famous Docker demo!

esamatti 12 years ago

The big point of Docker for me is that I can build the container on my machine, run automated tests on it, play with it and then ship it to the production machines when I'm confident that it is working.

If you build the container on a service like this, testing it is hard or in some cases even impossible - for example, acceptance tests with Selenium.

Gemfile.lock and similar version-pinning tools help, but prebuilt containers bring deployment stability to a whole new level, and that is why I'm excited about Docker and containers in general.

Do they support prebuilt containers?

  • lhc- 12 years ago

    "You can create a Docker file with some easy steps we’ve created, or you can upload your own Docker file and create an instance from that."

    Sounds like a yes.
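    For reference, an uploaded Dockerfile can be quite small. A hypothetical one for a Sinatra app (base image, package names, and paths are assumptions):

    ```dockerfile
    FROM ubuntu:12.04
    RUN apt-get update && apt-get install -y ruby rubygems
    RUN gem install sinatra
    ADD . /app
    EXPOSE 4567
    CMD ["ruby", "/app/app.rb"]
    ```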

shtylman 12 years ago

What would be even better is to decouple the idea of a drop from the containers running it. What I like about container approaches is having "machines" I can run them on. So let's say I make a "www" drop, or several. I should then be able to fire up my containers into particular types of drops and have them started on those without having to think about the specifics. The benefit of this is that I only care about my container running and having some basic resource requirements, and not so much about the specific machine instance it is running on. I could even co-mingle different containers on types of "machines". Also, separating out disk resources from CPU and RAM would be good. Maybe you do this already, but it wasn't clear to me.

rmoriz 12 years ago

> We’re using dedicated because running virtual containers on virtual instances seems nuts to us.

but a traceroute points to AWS…

  • ojbyrne 12 years ago
    • rmoriz 12 years ago

      building a virtualization infrastructure on top of another, black box virtualization infrastructure…

      What could possibly go wrong?

    • justincormack 12 years ago

      Those are still virtual, just no one else is on your box.

  • cmaggard 12 years ago

    A traceroute of their blog, or a server you just spun up? They don't necessarily have to eat their own dogfood.

    • rmoriz 12 years ago

      If you offer infrastructure services and don't eat your own dogfood you can't be serious.

      If you offer infrastructure services and don't tell people where and how you provide it, you can't be serious, too.

      • mic159 12 years ago

        But if you host your site on your own infrastructure and it goes down, you can't post status updates to tell people what's going on or when you will be back online. It's quite reasonable not to host your own homepage, or your mechanism for updating your customers, IMO.

        • rmoriz 12 years ago

          I disagree. Your website should run on your own infrastructure, and a separate status page under a different (sub)domain should be operated from another AS (autonomous system), e.g. statuspage.io or whatever you like/prefer.

  • edbyrneOP 12 years ago

    Blog and lots of Copper tools hosted on AWS. Stackdock on dedicated (not AWS or other IaaS) hardware.

AhtiK 12 years ago

Great initiative! One thing to be aware of is that Docker uses LXC for containers, and LXC relies on kernel isolation and cgroup limits. The concern is about vulnerabilities.

It is comforting that Heroku also uses LXC for dynos. It would be interesting to know how many in-house adjustments to the kernel and LXC have been made to ensure hardening.

  • bacongobbler 12 years ago

    I work at ActiveState on Stackato, which is a private Platform as a service. Similar to Heroku, only for private hosting (e.g. you host it on your own hardware or hypervisor). We use Docker as of our v3 beta release today (http://beta.stackato.com/). Our use of docker in 3.0+ means that we bring their tuned security along with us (they integrate with apparmor really well, in fact they require it to start up a container). Here's a really good overview of LXC (and docker) security in general: http://blog.docker.io/2013/08/containers-docker-how-secure-a...

chubot 12 years ago

Just curious, how are people building Docker images these days? Doesn't it only run on 64-bit Linux? I have a 32 bit Linux desktop and a Mac and haven't gotten around to installing Docker. At work I have a 64 bit Linux desktop and it seemed to be extremely picky about the kernel version so I gave up.

Are people running Linux VMs on their Macs to build containers?

I like the idea of this service. But both the client side and the server side have to be easy. Unless I'm missing something it seems like they made the server side really easy, but the client side is still annoying.

  • Lazare 12 years ago

    Yes. Emerging best practice seems to be to use Vagrant to create a good development environment, then use Docker containers inside that for isolation. The two work together quite well. There's a comment from the Vagrant creator about that here: https://news.ycombinator.com/item?id=6291549

    In short, yes, just run a VM.

    • chubot 12 years ago

      So I already use Linux almost exclusively for development, and VMs are not in my workflow at all. It seems bizarre to build a VM to build a container! Like too many levels of yak shaving.

      • Lazare 12 years ago

        > It seems bizarre to build a VM to build a container! Like too many levels of yak shaving.

        Perhaps, but you just said:

        > I have a 32 bit Linux desktop and a Mac and haven't gotten around to installing Docker. At work I have a 64 bit Linux desktop and it seemed to be extremely picky about the kernel version so I gave up.

        Precisely. Hence VMs, because Vagrant makes it trivial to spin up an instance configured however you like.

        You're basically saying "I have a problem installing Docker, but I don't need a VM because I don't have any problems a VM would solve", but this is nonsense because this is the precise problem development VMs are meant to solve.

        • chubot 12 years ago

          I can see where you're coming from, but my issue is that Docker itself is LESS portable than the applications it's containerizing! It's creating the very problem it's trying to solve. The task I care about isn't to build and deploy a Docker container. It's to build and deploy my app.

          I have a beef with build/deploy systems that have bootstrapping problems. For example, I'm hearing from people using Chef that they have to freeze the version of Chef, its dependencies, and the Ruby interpreter (or maybe it was Puppet; I don't use either). To me that is just crazy. My code isn't that picky about the versions it needs, and introducing a deployment tool like that makes things less stable, not more.

          Take Python, for example - in my experience it's almost entirely portable between Linux and Mac. And I imagine the same is true of node.js, Ruby, PHP, etc. Almost all the C libraries you need are portable too. So in my ideal world you would only use a VM when you actually need it for the OS/CPU architecture. I suspect that for a lot of people that would mean going without a VM 50-90% of the time, depending on how you like to develop.

          I'm working on a chroot-based build system, which in theory will work on Mac and Linux (but not Windows). It does need to solve a versioning problem, because stuff isn't as portable between Python 2.6 and Python 2.7 on the same OS as it is between Python 2.7 on two different architectures/OSes.

          • Lazare 12 years ago

            I think it might depend on what sort of problem you're trying to solve.

            If you have, let's say, a django app, and you want to be able to run it all sorts of places, Docker is very much the wrong tool; it doesn't run at all most places, and it's finicky to get working. You're better off just getting that one app to run when and where you want. And if you run into any issues, virtualenv will solve it, no big deal.

            If you have a bunch of apps you want to get running (or perhaps a bunch of interlocking pieces of a single stack, or the different elements of a SOA), then Docker suddenly starts to look very attractive. And then you might go to the trouble to get a single gold server image with docker installed and working (or an Ansible playbook, or a Chef cookbook, or a Digitalocean snapshot, or an EC2 AMI, or whatever), and you know you can just spin up a server and deploy any app you want to it. And once you start thinking about testing, CI, orchestration, automatic scaling, etc., it all becomes that much more attractive; you've got these generic docker servers, on the one hand, and these generic docker containers on the other, and you can mix and match them however you like. When you start having more than 1 server and 1 app, that's amazing. Very much worth the cost of entry of having to install docker everywhere...if you need that kind of thing.

            You're focusing on portability between operating systems, but that's not the point of docker; as you say docker isn't portable at all (which should be a strong hint that isn't the problem it solves). But docker containers are portable between servers with docker on it, and with some architectures (or at a certain scale), you will suddenly realise just how useful that is.

            If it helps, consider Heroku (and the other PaaS outfits like dotCloud, etc.). A lot of startups outsource big chunks of their infrastructure to Heroku, and Heroku uses a very docker-like architecture. If you were to shift that back in house, in many cases that same architecture makes sense (largely depending on just what you were outsourcing to Heroku...). ...and sometimes it doesn't. But if it does, docker is probably a core part of any attempt at implementing your own in-house PaaS. And if you need that kind of thing, you aren't going to stop because "well, it doesn't run on OSX"; nobody (well, nearly) is using OSX in production. :)

      • TheMakeA 12 years ago

        All of my projects now include a Vagrantfile. It makes managing dependencies and creating repeatable environments simple.

        The work flow is:

          $ vagrant up
          $ vagrant ssh
        
        Editing and committing changes take place on the host. Running servers, tests, building Docker containers, etc, takes place on the guest.

        You can get a new device ready for hacking on a project in minutes. Just git clone, vagrant up, grab a quick coffee.

        I would definitely recommend putting VMs in your work flow.
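        For what it's worth, the Vagrantfile for this kind of workflow can be tiny. A sketch (the box name and install script are assumptions, not from the comment above):

        ```ruby
        Vagrant.configure("2") do |config|
          # 64-bit guest so docker can run inside it
          config.vm.box = "precise64"
          # install docker on first boot
          config.vm.provision :shell, inline: "curl -sSL https://get.docker.io | sh"
        end
        ```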

        • shykes 12 years ago

          The dominant workflow in docker-land is to ditch the Vagrantfile, use a Dockerfile instead, and sometimes use Vagrant when it helps you get a VM up and running with docker on it (but that Vagrantfile is typically the same across all projects requiring docker).

    • cryptolect 12 years ago

      I don't get the need for Vagrant? Are you suggesting to use Vagrant solely for those not developing on a host capable of running Docker? If my host _can_ run Docker, what value do I get from running it inside Vagrant instead?

      • shykes 12 years ago

        We recommend using vagrant as a simple vm management tool, to quickly get a simple VM on your machine.

        If your host can already run docker containers, you don't need vagrant.

      • Lazare 12 years ago

        Vagrant is a useful way to very quickly get a Docker capable host. You wouldn't use it for production, no.

        For development, if you're running on OS X or Windows (in which case, my condolences), you basically have to use a VM. If developing on Linux, it's a tossup; the complexity and overhead of Vagrant versus the pain and annoyance of fooling around with kernels and dependencies.

        I use a Mac for day-to-day development, so a simple Vagrant VM is a no-brainer. :)

  • dekz 12 years ago

    I'm building docker images on 64-bit linux (ubuntu) and maintaining a repo of Dockerfiles, instead of uploading to the docker repository.

    You need a recent version of the linux kernel that supports Linux Containers. It's best if you can run Ubuntu 13 somewhere.

    > Are people running Linux VMs on their Macs to build containers?

    FreeBSD supports jails, which are similar to Linux containers in a way, but OS X does not. So unfortunately you're going to have to run a VM; check out vagrant and docker though. [1]

    [1]: http://docs.docker.io/en/latest/installation/vagrant/

boyter 12 years ago

I love this idea, and want to try it but I have no experience with Docker (on the todo list).

I wanted to spin up an instance of Sphinx Search but no idea how to go about doing it.

Maybe creating a set of tutorials would help with this. I can think of two advantages. The first is that customers like myself will love it. Second, similar to Linode and their tutorials, it will drive a lot of traffic and establish your reputation as Docker experts. It will probably build a lot of back-links too, as people link to your tutorials.

  • frakkingcylons 12 years ago

    Absolutely. Along similar lines, DigitalOcean has done a great job of encouraging the community to write tutorials and articles, and as a result there are tons of resources to get you started with all kinds of ways to use a VPS. Doing the same would be tremendously beneficial for Stackdock.

jaegerpicker 12 years ago

This is pretty awesome. An api to automate deployments/management/monitoring would completely rock too.

erichocean 12 years ago

How is private networking handled between Docker containers?

UPDATE: I'd also be interested to hear about Digital Ocean-style "shared" (but non-private) networking—basically, any network adaptor with a non-Internet routable IP address. ;)

pbhjpbhj 12 years ago

Not being familiar with the subject basically it seems that:

Docker is a simple description of an internet server, including the various services required (mysql, httpd, sshd, etc.) - the bundle being called a deck.

It seems then that you can create a server elsewhere (e.g. on your localhost), generate the Docker description of it, and use that description to fire up a server (either a VM or dedicated) using the service in the OP.

Am I close?

Could I use this to do general web hosting?

Edit: and looking at digitalocean.com it appears I can activate and deactivate the "server" at will, so I can have it online for an hour for testing and pay < 1¢?

conradev 12 years ago

This looks awesome! I currently have an AWS box for the same purpose, running a few of my docker containers. Will this support the ADD directive, or the ability to add custom files (config files) into containers?

lnanek2 12 years ago

Wonder if they have an idle/spin-up time. Only their one-instance plan is $5, but I know I have to buy more than one dyno on Heroku to avoid idle/spin-up time - that, or use hacks like constant pingers, etc. This is important when I'm doing experiments/UI tests/alpha tests/submitting apps for review before they have any consistent traffic, and I don't want them to occasionally get stuck on 15-second spin-up times on requests.

  • habosa 12 years ago

    There are some websites that will ping your heroku instance every few minutes for free. Works great for me.

guido4000 12 years ago

I'm not sure about the pricing yet as I can run like 5 or 10 docker instances in one DigitalOcean VM costing 5 dollars per month.

  • sntran 12 years ago

    Probably the differences are that your Docker instances run on a dedicated server instead, and that all the setup, preparation, and maintenance is done for you.

nfm 12 years ago

Looks cool. Here's what I'd love to see: built-in git deployment (ie. take a Dockerfile, build an image from it, and then after a push add the latest source code to /app and start new instances), and some kind of orchestration so you could run a number of app containers behind a load balancer container.

Matsta 12 years ago

Hmm StackDock.com is hosted on a server at Hetzner in Germany.

I don't 100% know if the containers themselves are hosted by Hetzner or not, but Hetzner is more of a budget provider than something you host production sites on.

I've heard many mixed reviews about their network, and mostly about their support, which isn't up to scratch. We'll see what happens, but from what I've seen, if someone decides to abuse the service, Hetzner might just take down the whole server without warning, just like OVH does.

http://www.hetzner.de/en/hosting/produkte_rootserver/px120ss... (I'm guessing they are using something similar to this). It's a pretty powerful and cheap server, but if you search hard enough you can find something equivalent in the States for around the same price.

  • thomaslutz 12 years ago

    Hetzner has surely gone downhill over the years (quality- and price-wise), and support was better 8 years ago, but saying you wouldn't host a production site there is a pretty bold statement.

    If you need real HA you should perhaps use more than one provider anyway. Or what are your recommendations?

  • gexla 12 years ago

    Of course, with the prices these guys are charging, they are certainly going with a budget host.

    Since Docker is still in beta, it's not production-ready yet anyway. Docker could still go through a lot of changes between now and 1.0.

    ETA: Whoops, I got the pricing wrong. It's $5 per instance. I was thinking you would get 1GB of RAM and 20GB of space to run as many containers as you like. That makes it not as cheap as I was originally thinking.

  • killerpopiller 12 years ago

    I'd actually prefer Germany, since data privacy is the law there, not spying.

Touche 12 years ago

Is the pricing for 1 dockerfile or unlimited dockerfiles?

  • edbyrneOP 12 years ago

    It's per instance - so you can have unlimited Dockerfiles; you only pay when you create an instance from one.

arianvanp 12 years ago

I love the idea! Really. I just don't like all of the UX yet. Some things feel... off. It might be something personal, I'm not sure, but I guess it's interesting to discuss. "Drops are distilled Decks" - the words feel semantically mismatched for some reason. If I think "Deck" I don't think "config". If I think "Drop" I don't think "deployable stuff", and I don't see how a "distilled Deck" is a "Drop". Also, it feels odd that I can create a "New deck" in the "Instances" section.

though adding "cards" to a "deck" sounds intuitive.

I'm trying to come up with better terminology. something with ships and containers...

  • cobrabyte 12 years ago

    One thing that surprised me...

    When I created a Deck (default Sinatra Hello World) and converted it to a Drop, it did just that: it removed the Deck and created a Drop.

    I guess I thought it would keep the Deck so that I could see the configuration I had chosen to create it. Is this a Docker thing where, once you've created it, you no longer see the config? I don't think it is, but honestly I've not played with Docker yet. $5 a month is a low ask for me to try it out.

    Also, when it comes time to pay for a Deck/Drop and you don't have credit card info saved, it forwards you to that page... but after entering the info, you're not put back into the process; you're dumped back onto the Deck page. That seemed odd to me - I wasn't sure if it had been converted or not.

    I wish the word 'manifest' wasn't used in so many contexts because, if you're going to stick with the container shipping analogy, it would have made more sense to have Manifests, Containers and Ships. That's just me though... who knows. ;)

    All in all, cool service. Look forward to playing around with it this weekend.

    EDIT: I see that you can create a copy of the Deck that created a Drop... still seems odd that the default behavior is to blow it away upon creation of a Drop.

kbar13 12 years ago

IMO, labels/tooltips should be added to the icons for the cards. Some of them, including the leaf (Node.js?) and the tree (no idea what that is), aren't especially obvious.

Otherwise, cool!

  • jonny_eh 12 years ago

    And when I click on one, the checkmark doesn't disappear until I unhover the mouse.

dylanz 12 years ago

Very cool, and I was waiting for something like this to be built out. Are you planning on having a command line tool to control your deployments?

theunixbeard 12 years ago

I started the default instance with sinatra running, but where do you see the IP address to visit it via a web browser?

j-b 12 years ago

Just signed up but the site now appears to be down, receiving "We're sorry, but something went wrong."

  • edbyrneOP 12 years ago

    Hackernews traffic spike! You can signup and create a Dockerfile - we've just paused instance deployment for a couple of hours as we add more servers. Sorry for the inconvenience.

bionsuba 12 years ago

You should do some A/B tests to confirm, but I bet the pricing table at the bottom is a little confusing because the price is not highlighted in any way, and the call to action is round when it is typically a rectangle.

tehwebguy 12 years ago

Looks awesome! Anyone know if there are bandwidth / throughput / transfer charges?

Also, forgive my ignorance, but what would it take to be able to "add containers" in the same way that you can add dynos on Heroku?

MilesTeg 12 years ago

The issue with Linux containers is (or at least it used to be) that it's possible for a malicious user to 'break out' of the container. Has this problem been solved?

knotty66 12 years ago

Nice. Looking forward to seeing how this and all the other Docker based PAAS ecosystems like Flynn, Deis, Tsuru, Shipbuilder, CoreOS etc pan out.

  • gabrtv 12 years ago

    I agree this looks very cool. As far as http://deis.io/ is concerned, we're focused more on the "operate your own PaaS" capability, whereas this seems to be a pure hosted service -- which is great for lots of use cases.

    Best of luck guys!

shtylman 12 years ago

Can I use a docker image I have already created?

Geee 12 years ago

Is this production-ready and trusted? Who are these guys? I don't want my apps to be hosted on a quick hack.

kohanz 12 years ago

This looks like an awesome service. And the image on the site reminds me of Season 2 of The Wire - even better!

susi22 12 years ago

Q: Do people have root on the containers?

samtp 12 years ago

Cool service but your branding makes it look like you are affiliated with Canonical/Ubuntu.

secure 12 years ago

Excellent! Will play around with it soon. Thanks for offering this, and best wishes.

aurels 12 years ago

I get a 500 error when logging in - am I the only one?

gregf 12 years ago

Like the idea, but would like to see hourly billing.

  • edbyrneOP 12 years ago

    Thanks for the suggestion - we are looking at more usage based billing - including per CPU cycle / RAM usage to be a 'true' utility.

cvburgess 12 years ago

This is fantastic!

Does anyone know where DO servers are located?

madisp 12 years ago

Clicking on alpha/deploy leads to 404 :(

jongleberry 12 years ago

where are the servers hosted? AWS? US or EU?

matiasb 12 years ago

Sounds great!
