- Isolating an application's dependencies
- Creating an application image and replicating it
- Creating ready-to-start applications that are easy to distribute
- Allowing easy and fast scaling of instances
- Testing out applications and disposing of them afterwards (see the sketch after this list)
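To make those points concrete, here is a purely hypothetical sketch: a Dockerfile that packages an imaginary Node.js app into an image. The base image, file names, and port are placeholders, not anything prescribed by this article.

```dockerfile
# Hypothetical Dockerfile for a small Node.js app (all names are placeholders)

# Start from an official Node.js base image
FROM node:18-alpine

# Work inside /app for the remaining instructions
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install

# Copy the rest of the application code into the image
COPY . .

# The process the container runs when it starts
CMD ["node", "server.js"]
```

Building the image once and then running disposable containers from it covers most of the list above:

```bash
# Build the image and tag it (the tag is a placeholder)
docker build -t my-app .

# Run a throwaway container from the image; --rm removes it as soon as it exits
docker run --rm -p 3000:3000 my-app
```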
The idea behind Docker is to create portable lightweight containers for software applications that can be run on any machine with Docker installed, regardless of the underlying OS.
The diagram above shows how a traditional VM is configured for running software applications. The Hypervisor is the layer that does the actual virtualization: it takes computing resources from the Host Operating System and uses them to create virtual hardware, which is then consumed by the Guest Operating Systems. Once a guest OS is installed, you can install your software applications, binaries, and supporting libraries on top of it.
VMs are great for providing complete isolation from the Host OS: if something goes wrong in an application or in a guest OS, it won't impact the Host OS or the other guest OSs. However, all this comes at a significant cost, because the server running the stack has to spend a large share of its computing resources on running full guest operating systems rather than on the applications themselves.
With Docker, you’d have a thinner layer:
The rule of thumb in the Docker world is to run one process per container: each process on your server gets its own Docker container.
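Following that rule, a web application and its database would live in two separate containers rather than being bundled into one. Here is a minimal sketch using the plain docker CLI; the network name, container names, image, and credentials are all placeholders.

```bash
# One container per process: the web app and its database each get their own container
docker network create app-net                  # shared network so the containers can talk

docker run -d --name db --network app-net \
    -e POSTGRES_PASSWORD=example postgres:16   # database process in its own container

docker run -d --name web --network app-net \
    -p 3000:3000 my-app                        # web process in a separate container
```

Because each container wraps exactly one process, either one can be restarted, replaced, or scaled without touching the other.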