Dockerizing a Python Web App (blogs.aws.amazon.com)
Can someone please explain the purpose of this. Why dockerize a Python app instead of just running it on a Linux instance? Also, do you get forever hooked on Amazon services in the process?
To me the most useful application is custom environment deployment, much the way I might use Chef or Puppet. That is arguably far from the original purpose of Linux Containers, and Docker may not be a standardized application of LXC, but it earns a lot of points with me for its git-like workflow. If you are rolling out your own web apps in droves with limited resources, it is nice to compartmentalize, modularize, and automate your dev and deployment stacks. Especially when you work with different clients that all have slightly different stack requirements, it is nice to share resources while remaining isolated.
edit: I have not used containers with Amazon Web Services, but Docker is really nice to have on my VPS, where full virtualization and even paravirtualization are typically not an option.
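For what it's worth, here is a minimal sketch of what "dockerizing" a small Python web app can look like. The file names, port, and base image are just illustrative assumptions, not anything from the article:

    # Dockerfile -- hypothetical Flask-style app; adjust names, port and base image to your stack
    FROM python:2.7
    # Install dependencies first so this layer is cached between builds
    ADD requirements.txt /app/requirements.txt
    RUN pip install -r /app/requirements.txt
    # Then add the application code itself
    ADD . /app
    WORKDIR /app
    EXPOSE 5000
    # Run the app as the single foreground process Docker expects
    CMD ["python", "application.py"]

From there, `docker build -t myapp .` and `docker run -d -p 5000:5000 myapp` give you the same environment on any host with Docker installed, whether that's a VPS, DigitalOcean, or EC2.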
A container should make the app easier to install and move to another host, assuming it isn't tied into AWS services.
My main reasons for using docker have been for portability and a development environment that is an exact match to that of production.
It's also very useful in deployment; I have different containers for my web tier, primary worker, and Mongo replica sets. Combined with DigitalOcean, it makes it very easy to quickly spin up new servers and set them up as needed with only a few commands. There is also comfort in knowing that I can do the exact same thing on Linode without any modification to the containers.
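As a rough sketch of what that split looks like in practice (the image names below are placeholders for my own images, not anything official):

    # one container per role, all started from the same host
    sudo docker run -d myorg/mongodb
    sudo docker run -d myorg/worker
    sudo docker run -d -p 80:8000 myorg/web

Because each role is its own image, pointing the same few commands at a fresh DigitalOcean or Linode box is all it takes to rebuild the stack.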
Similar to Jradd, I'm working on creating a training environment (based on an existing Python app) using Docker. Using Docker for containerization lets me launch instances with preset disk/memory quotas, and it also handles forwarding ports from the host to the container automatically.
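Roughly what that run command looks like for me; the memory limit, ports, and image name here are illustrative (disk quotas depend on the storage backend, so I've left those out):

    # cap the container's memory and map host port 8080 to port 80 inside it
    sudo docker run -d -m 512m -p 8080:80 myorg/training-env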
For a single application, docker doesn't add much except for portability. For a single application with many components, docker adds process isolation (and quotas, and ...).
TFA says the app was "originally written for the Elastic Beanstalk Python environment". Running on Docker is certainly more portable than being customized for Elastic Beanstalk. However, this particular app also uses SNS and DynamoDB; if it depends on those, it isn't likely to leave AWS.
No reason you have to be. In fact, if you're doing things right you're probably already testing inside a VirtualBox or Docker instance anyway, and that same setup script can be used to deploy your application to any vanilla cloud provider without getting hooked on Elastic Beanstalk.
Hey, funny, I've spent the last two days doing something like this.
We have a large Laravel app and want to run CI/CD on it, along with acceptance tests (all in Strider CD). The thing is, it has some annoyingly outdated dependencies, and I need a way to spin up an actual instance of our app, with MySQL, Apache, and everything else, so I can run Huxley and Selenium against it.
So, I turned to Docker! I'm nearly at a point where we can just do a `docker pull <appname>` and then run the commands we want from there, but I'm not there yet. It's really interesting, although the focus on "single process that runs in the foreground" stumps me a little. I'd love a way to run `httpd` and other services in the background, but still use `docker run <appname> /var/www/vendor/bin/phpunit` for the tests. Has anyone had any luck doing something similar?
For what it's worth, we (the Docker maintainers) are working on adding this feature :) In the meantime you can use something like supervisord or runit inside your container.
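If it helps, a minimal supervisord sketch looks something like this (the program names and paths are hypothetical examples; the key part is nodaemon=true so supervisord itself stays in the foreground):

    ; supervisord.conf, baked into the image and used as the container's main process
    [supervisord]
    nodaemon=true

    [program:apache2]
    command=/usr/sbin/apache2ctl -D FOREGROUND

    [program:mysqld]
    command=/usr/bin/mysqld_safe

The Dockerfile's CMD then runs something like `supervisord -c /etc/supervisord.conf`, so the container's single foreground process is supervisord and the services run under it.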
Yeah runit is the way I think I'm going to tackle it! Cheers :)
Looking forward to it landing as a built-in feature. I truly think this is a killer app for us web devs: having a container set up for development and being able to re-use it for super easy testing and one-click deployment without all the futzing around? Yes please!
Here is a real-world example of a run command that sets up MongoDB [http://docs.docker.io/examples/mongodb/] with Docker, sharing a common database located in the host directory /srv/pool/db in my scenario:
MONGO_ID=$(sudo docker run -d -v /srv/pool/db:/srv/data/db:rw jradd/mongodb)
The same usage can apply to Postgres, MySQL, Redis, etc. ... I'm going to dockerize my python, if you know what I mean.
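For example, the same volume-mounting pattern with a (hypothetical) Redis image of my own, pointing the mount at wherever that image keeps its data:

    REDIS_ID=$(sudo docker run -d -v /srv/pool/redis:/srv/data/redis:rw jradd/redis)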