Always use a virtual environment for your software development
Hello friends. I am a DevOps engineer who deals with the development, production, monitoring, and configuration management of software. But every developer faces a few housekeeping chores that are quite irritating. Say you have a single Ubuntu system, and for a project's sake you install several database servers and other applications on it. The dependencies of those applications will cause trouble when your favorite games and your work projects share the same system.
What if something goes wrong in your work project and everything on your system gets messed up? So I always suggest separating your work from your personal things. In this tutorial, I am going to show you how a developer can run a project and its applications in a lightweight virtual environment called a container. This container is created by Docker. We can access the Docker container from the host system to edit code and view the output. So always use a virtual environment for your software development.
What is Docker?
- Open platform for developers and sysadmins to build, ship, and run distributed applications.
- Lightweight containers
- Fast, unlike a VM
- Minimal resource usage
- Run thousands of containers
- Easy to run your whole production stack locally
Docker can run a hundred lightweight environments on a single laptop. Unlike a virtual machine, a Docker container launches in about a second. It gives a developer an isolated environment to work in. In this post, I am going to create a Docker container, set up a Python Django project in it, and push it to my cloud repository.
Docker achieves its robust application (and therefore process and resource) containment via Linux Containers (i.e. namespaces and other kernel features). Its further capabilities come from the project's own components, which abstract away the complexity of the lower-level Linux tools and APIs used for system and application management, with regard to securely containing processes.
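You can peek at the namespace machinery Docker builds on from any Linux shell; this quick sketch just lists the namespaces your own shell process lives in:

```shell
# Every Linux process belongs to a set of kernel namespaces; Docker gives
# each container fresh ones. List the namespaces of the current shell:
ls /proc/self/ns
# Entries such as ipc, mnt, net, pid and uts each isolate one resource type
```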
Main Docker Parts
- docker daemon: used to manage docker (LXC) containers on the host it runs on
- docker CLI: used to command and communicate with the docker daemon
- docker image index: a repository (public or private) for docker images
Main Docker Elements
- docker containers: directories containing everything your application needs
- docker images: snapshots of containers or base OS (e.g. Ubuntu) images
- Dockerfiles: scripts automating the building process of images
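To give a taste of that third element, here is a hypothetical Dockerfile for the kind of Django setup this post builds by hand. The package choices are my own assumptions, not from the post; writing the file from the shell keeps the example self-contained:

```shell
# Write a minimal Dockerfile for an Ubuntu-based Django image
# (python-pip and django are illustrative package choices)
cat > Dockerfile <<'EOF'
FROM ubuntu
RUN apt-get update && apt-get install -y python-pip
RUN pip install django
EXPOSE 8000
EOF

# Build an image from it (needs the docker daemon running):
#   sudo docker build -t mydjango .
```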
Installation Instructions for Ubuntu 14.04
$ sudo apt-get update
$ sudo apt-get install docker.io
$ sudo service docker.io start
$ sudo ln -sf /usr/bin/docker.io /usr/local/bin/docker
$ sudo sed -i '$acomplete -F _docker docker' /etc/bash_completion.d/docker.io
Using Docker
Now we will use Ubuntu as the base image for creating containers, then modify a freshly created container and commit it.
$ sudo docker pull ubuntu
This pulls the base Ubuntu image to your system. After a successful pull, we can launch a new container with the following command.
$ sudo docker run -i -t -p 127.0.0.1:8000:8000 ubuntu
This creates a new container with container port 8000 forwarded to port 8000 on the host's loopback interface. The flags mean this:
-i keeps stdin open (interactive mode)
-t allocates a pseudo-terminal
-p forwards a container port to the host
Things a developer needs
1) Edit the project code from the host in your favorite editor.
2) Run the project in that virtual container environment.
3) See the output through the forwarded port.
So in order to edit code, you need to access the container's data from your host. Instead, you simply mount a host directory into your Docker container. Then your project code lives on the host but runs in the container. Cool, right?
So we modify the command for running the new container like this:
$ sudo docker run -i -t -p 127.0.0.1:8000:8000 -v /home/hproject:/home/cproject ubuntu
Now a Docker container will be created and started with the following:
* port 8000 of the container is forwarded to localhost:8000
* The host directory "/home/hproject" is mounted as "/home/cproject" in the container. It means the files in /home/hproject on our host system are fully accessible from the container at /home/cproject.
These two things are exactly what a Django developer needs: to modify code and to view the output in a browser. Now they don't care where the code runs. But here the code runs in isolation, in a lightweight Docker container. Vagrant has the same port-forwarding and volume-mounting strategy, but Vagrant is a VM, while Docker is a VE.
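To convince yourself the volume mount works, drop a file into the host directory and read it back from inside the container. A sketch, using /tmp/hproject to stand in for /home/hproject so no root access is needed on the host side:

```shell
# Host side: create a file in the directory that will be mounted
mkdir -p /tmp/hproject
echo "hello from host" > /tmp/hproject/test.txt

# Container side, after: docker run ... -v /tmp/hproject:/home/cproject ubuntu
#   cat /home/cproject/test.txt    # the same file, visible instantly
```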
Play with the container
Now we have a running container. It looks like a bash shell with a # prompt. Install the following things in it:
* Set up Django, MySQL, and your project
That's it. You can install anything the project requires. But remember, after doing your stuff, use exit to come out of the container.
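The install steps inside the container can be collected into a small provisioning script so they are repeatable. A sketch with assumed package names and paths (the post only says Django, MySQL, and your project):

```shell
# Provisioning script to run inside the container; package names are
# illustrative assumptions for an old Ubuntu 14.04 base image
cat > setup.sh <<'EOF'
apt-get update
apt-get install -y python-pip mysql-server
pip install django
cd /home/cproject
python manage.py runserver 0.0.0.0:8000
EOF

# Syntax-check the script without executing it:
sh -n setup.sh
```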
Now if you do not commit the container, all the changes you made will be lost. So commit the container first. Before that, find the container ID with this command:
$ sudo docker ps -a
and find the container that exited most recently. You can give the container a name while committing:
$ sudo docker commit b03e4fb46172 mycontainer
This commits the container we have been working in and names it mycontainer. If we wish to launch the container again to work on our project, we use the following commands.
$ sudo docker start mycontainer
$ sudo docker attach mycontainer
That's it. You will enter a virtual container where you can run your Django project. If you want to remove a container, just do:
$ sudo docker rm mycontainer
If you wish to push your container to the cloud, use the Docker Hub registry. Don't forget to commit after coming out of the container. You can carry your entire project with its environment anywhere. Package your project (Django, Flask) plus its environment (MySQL, PostgreSQL, Redis) into a tar file and ship it anywhere. That is the magic of Docker. To do that, just export a container to a TAR file.
Exporting the containers
$ sudo docker export mycontainer > /home/mycontainer.tar
The exported tarball can be re-created as an image on the target host with docker import. Alternatively, you can save images, carry them between systems over FTP, and load them on the target node.
Loading and saving the images
$ sudo docker save mycontainer > /tmp/mycontainer.tar
and then load it in target host as
$ sudo docker load < /tmp/mycontainer.tar