Show HN: Microscaling-in-a-box – demo of autoscaling with containers

app.force12.io

31 points by lizxrice 10 years ago · 9 comments

rossf7 10 years ago

Hi, I'm Ross, one of the developers at Force12. We've written a blog post with more details about the tool.

http://blog.force12.io/2015/10/16/microscaling-in-a-box.html

Happy to answer any questions. Also all feedback is greatly appreciated!

beaker52 10 years ago

Does this scale across multiple docker hosts or just act as a scheduler for a single host?

In what circumstances is this better than Docker Swarm, with its multi-host scheduling?

https://docs.docker.com/swarm/scheduler/strategy/

  • rossf7 10 years ago

    We're working on a container scaling solution that runs on top of the scheduler. It will scale containers based on current demand. There is overlap with the Swarm strategies, but we're focused on autoscaling; for example, we'll use the scheduler to provide fault tolerance.

    The demo tool we've released today is single node and uses the Docker Remote API. We've also built demos that integrate with the ECS and Marathon schedulers. Force12 will be platform agnostic, so we plan to integrate with more schedulers, and Swarm is definitely on that list.
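
    For anyone curious what single-node scaling over the Docker Remote API can look like, here is a rough sketch in Go that starts or stops containers via the daemon's Unix socket. The container names and the hard-coded demand value are placeholders, not taken from the Force12 agent.

      // Minimal sketch: drive container count up or down over the
      // Docker Remote API (POST /containers/{id}/start and /stop)
      // on the local Unix socket. Not the Force12 agent itself.
      package main

      import (
          "context"
          "fmt"
          "net"
          "net/http"
      )

      // client talks to the Docker daemon on its default Unix socket;
      // the host in the request URL is ignored because of the dialer.
      var client = &http.Client{
          Transport: &http.Transport{
              DialContext: func(_ context.Context, _, _ string) (net.Conn, error) {
                  return net.Dial("unix", "/var/run/docker.sock")
              },
          },
      }

      // setContainerState starts or stops an existing container.
      func setContainerState(id, action string) error {
          url := fmt.Sprintf("http://docker/containers/%s/%s", id, action)
          resp, err := client.Post(url, "application/json", nil)
          if err != nil {
              return err
          }
          defer resp.Body.Close()
          // 204 = done, 304 = already in that state.
          if resp.StatusCode >= 300 && resp.StatusCode != 304 {
              return fmt.Errorf("%s %s: HTTP %d", action, id, resp.StatusCode)
          }
          return nil
      }

      func main() {
          // Placeholder container names and demand level; a real agent
          // would derive demand from a metric and loop continuously.
          containers := []string{"demo1", "demo2", "demo3"}
          demand := 2

          for i, id := range containers {
              action := "stop"
              if i < demand {
                  action = "start"
              }
              if err := setContainerState(id, action); err != nil {
                  fmt.Println(err)
              }
          }
      }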

    • beaker52 10 years ago

      Most services are developed to make maximum use of a host machine's hardware, so scaling multiple containers of the same image on a single physical host has very limited value.

      Services are typically clustered across multiple hosts to alleviate pain associated with the availability constraints of a single host (e.g. physical resources, redundancy). "Scaling" containers across just a single host is:

      1) a misunderstanding on the part of the operator, or

      2) a very inefficient way of maximising the availability of a service on a single host.

      In the case of 2), if you need to run multiple instances of your web service because one container doesn't maximise the resources of your host, then you're solving the problem in the wrong way. You should be making your service, in a single container, capable of maximising the resources of that host. Otherwise you'll waste physical resources on the overhead of running unnecessary containers.

      Running multiple containers of the same image on the same host should be limited to testing of clustered services on a local host, admittedly with a few exceptions. I'd welcome any arguments to the contrary.

      • rossf7 10 years ago

        Yes, the tool we've released today is for playing with, not production use. It's single host to keep things simple. I definitely agree that for production use a multi-host cluster is essential for redundancy.

        We think the faster launch times of containers make autoscaling easier than with VMs, so our tool will prioritise the mix of running containers based on current demand.

        To do this we'll need to support multiple metrics. Some of those will be within the cluster, like CPU and RAM, but some may be external, like the length of a message queue or requests per second on a load balancer.
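
        To make that concrete, here is a rough sketch of the kind of metric interface this could imply; the names (Metric, QueueLength) and the proportional scaling rule are illustrative assumptions, not the Force12 design.

          // Sketch of a demand-metric abstraction: each metric reports
          // a current value and a target, and the replica count scales
          // with the metric furthest over its target. Hypothetical names.
          package main

          import "fmt"

          // Metric is one demand signal, e.g. CPU, queue length, or
          // requests per second on a load balancer.
          type Metric interface {
              Current() (float64, error) // current value of the signal
              Target() float64           // value we'd like to hold it at
          }

          // QueueLength stands in for an external metric such as the
          // depth of a message queue.
          type QueueLength struct{ Depth, TargetDepth float64 }

          func (q QueueLength) Current() (float64, error) { return q.Depth, nil }
          func (q QueueLength) Target() float64           { return q.TargetDepth }

          // desiredReplicas scales the running count in proportion to
          // the metric that is furthest over its target.
          func desiredReplicas(running int, metrics []Metric) int {
              worst := 1.0
              for _, m := range metrics {
                  v, err := m.Current()
                  if err != nil {
                      continue // skip metrics we couldn't read
                  }
                  if ratio := v / m.Target(); ratio > worst {
                      worst = ratio
                  }
              }
              return int(float64(running)*worst + 0.5) // round to nearest
          }

          func main() {
              metrics := []Metric{QueueLength{Depth: 150, TargetDepth: 50}}
              fmt.Println(desiredReplicas(2, metrics)) // prints 6
          }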

phantom_oracle 10 years ago

Is the Go binary open-sourced or is the entire scaling tool open source?

I'm not quite sure what this means:

"Now that we’ve open sourced our Force12 Agent we’ve got follow up releases ..."

aecurrie 10 years ago

I'm also a dev on this (Anne). We've had a question: are the scaling containers running locally on your machine? The answer is yes; we just display what's happening on your machine on our portal.
