Automating Docker Builds With Gradle

We’re all using Docker containers these days to do all sorts of great stuff, but it’s not always obvious how best to automate the building of images and embed good repeatable processes into our projects.

Get started with Docker for Mac

Docker Notes

Check versions

Ensure your versions of docker, docker-compose, and docker-machine are up-to-date and compatible with each other. Your output may differ if you are running different versions.

$ docker --version
Docker version 18.03, build c97c6d6

$ docker-compose --version
docker-compose version 1.22.0, build 8dd22a9

$ docker-machine --version
docker-machine version 0.14.0, build 9ba6da9


Cleaning up images: an image cannot be removed while a container (even a stopped one) still references it, so stop and remove the container first, or force-remove the image by ID.

$ docker image ls                          # list images and their IDs
$ docker stop cf2a5040f044                 # stop the container using the image
$ docker rm cf2a5040f044                   # remove the stopped container
$ docker image rm phpmyadmin/phpmyadmin    # now the image can be removed by name
$ docker rmi -f 126b8717cebb               # or force-remove by image ID
$ docker image prune                       # remove dangling images
$ docker image ls                          # verify


$ docker images -a | grep "activity"   # list images whose name matches "activity"
$ docker rmi --force 9766f4e4f73f      # force-delete an image by ID

Explore the application

  • Open a command-line terminal and test that your installation works by running the simple Docker image, hello-world:
$ docker run hello-world

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
ca4f61b1923c: Pull complete
Digest: sha256:ca0eeb6fb05351dfc8759c20733c91def84cb8007aa89a5bf606bc8b315b9fc7
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.
  • Start a Dockerized web server. Like the hello-world image above, if the image is not found locally, Docker pulls it from Docker Hub.
$ docker run -d -p 80:80 --name webserver nginx
  • In a web browser, go to http://localhost/ to view the nginx homepage. Because we specified the default HTTP port, it isn’t necessary to append :80 at the end of the URL.


Note: Early beta releases used docker as the hostname to build the URL. Now, ports are exposed on the private IP addresses of the VM and forwarded to localhost with no other host name set.

  • View the details on the container while your web server is running (with docker container ls or docker ps):
$ docker container ls
CONTAINER ID   IMAGE   COMMAND                  CREATED              STATUS              PORTS                         NAMES
56f433965490   nginx   "nginx -g 'daemon off"   About a minute ago   Up About a minute>80/tcp, 443/tcp   webserver
  • Stop and remove containers and images with the following commands. Use the “all” flag (--all or -a) to view stopped containers.
$ docker container ls
$ docker container stop webserver
$ docker container ls -a
$ docker container rm webserver
$ docker image ls
$ docker image rm nginx


Docker Hub :

Official site :

Docker Pros and Cons

Docker Interview Questions & Answers

What is Docker?

Docker is a containerization platform which packages your application and all its dependencies together in the form of containers, so as to ensure that your application works seamlessly in any environment, be it development, test, or production.

Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries, and anything else that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.

You can refer to the diagram shown below: because containers running on a single machine share the same operating system kernel, they start instantly (only the application needs to start, since the kernel is already running) and they use less RAM.

What are the differences between Docker and Hypervisors?

A hypervisor virtualizes hardware and runs a complete guest operating system for every virtual machine. Docker containers instead share the host's kernel and isolate only user space, so they are far smaller, start in seconds rather than minutes, and allow many more instances on the same hardware.

What is Docker image?

A Docker image is the source of a Docker container. In other words, Docker images are used to create containers. Images are created with the build command, and they produce a container when started with run. Images are stored in a Docker registry such as Docker Hub. Because they can become quite large, images are designed to be composed of layers of other images, so that a minimal amount of data is sent when transferring images over the network.

What is Docker container?

Docker containers include the application and all of its dependencies, but share the kernel with other containers, running as isolated processes in user space on the host operating system. Docker containers are not tied to any specific infrastructure: they run on any computer, on any infrastructure, and in any cloud. A Docker container can be created either by building a Docker image and then running it, or by running an image that is already available on Docker Hub.

Docker containers are basically runtime instances of Docker images.

What is Docker Hub?

Docker Hub is a cloud-based registry service which allows you to link to code repositories, build and test your images, store manually pushed images, and link to Docker Cloud so you can deploy images to your hosts. It provides a centralized resource for container image discovery, distribution and change management, user and team collaboration, and workflow automation throughout the development pipeline.
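
As a sketch of the typical Docker Hub workflow (the image and repository names here are placeholders, not from the original notes):

```shell
$ docker login                              # authenticate against Docker Hub
$ docker tag my-app <your-id>/my-app:1.0    # tag a local image under your repository
$ docker push <your-id>/my-app:1.0          # upload the image to Docker Hub
$ docker pull <your-id>/my-app:1.0          # download it on any other host
```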

How is Docker different from other container technologies?

Docker containers are easy to deploy in a cloud. Docker can get more applications running on the same hardware than other technologies; it makes it easy for developers to quickly create ready-to-run containerized applications; and it makes managing and deploying applications much easier. You can even share containers along with your applications.

What is Docker Swarm?

Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual Docker host. Because Docker Swarm serves the standard Docker API, any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts.
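
As a minimal sketch (the manager address and service details are illustrative placeholders, not from the original notes), a swarm is created on one node and then driven through the normal Docker CLI:

```shell
$ docker swarm init --advertise-addr <manager-ip>   # turn this host into a swarm manager
$ docker service create --name web --replicas 3 -p 80:80 nginx
$ docker service ls                                 # list services running on the swarm
```

Worker nodes can then join using the docker swarm join command that init prints.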

What is Dockerfile used for?

A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build, users can create an automated build that executes several command-line instructions in succession.
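
For instance, a minimal Dockerfile for an nginx-based image might look like this (the paths and tag are illustrative assumptions, not from the original notes):

```dockerfile
# Start from the official nginx base image
FROM nginx:alpine

# Copy static site content into the web root served by nginx
COPY ./site /usr/share/nginx/html

# Document the port the server listens on
EXPOSE 80
```

Building it with docker build -t my-site . executes each instruction in order, committing each step as a layer of the final image.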

Can I use json instead of yaml for my compose file in Docker?

You can use JSON instead of YAML for your Compose file. To use a JSON file with Compose, specify the filename, for example:

docker-compose -f docker-compose.json up
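
For example, a minimal docker-compose.json (the service name and image are illustrative assumptions) has the same structure its YAML equivalent would:

```json
{
  "version": "3",
  "services": {
    "web": {
      "image": "nginx:alpine",
      "ports": ["80:80"]
    }
  }
}
```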

How to create Docker container?

We can use a Docker image to create a Docker container with the below command:

docker run -t -i <image name> <command>

This command will create and start a container.

To check the list of all containers on a host, including stopped ones, together with their status, use the below command:

docker ps -a

How to stop and restart the Docker container?

In order to stop a Docker container you can use the below command:

docker stop <container ID>

Now to restart the Docker container you can use:

docker restart <container ID>

How far do Docker containers scale?

Large web deployments like Google and Twitter, and platform providers such as Heroku and dotCloud all run on container technology, at a scale of hundreds of thousands or even millions of containers running in parallel.

What platforms does Docker run on?

Docker runs on Linux and on cloud platforms. Supported Linux distributions include:

  • Ubuntu 12.04, 13.04 et al
  • Fedora 19/20+
  • RHEL 6.5+
  • CentOS 6+
  • Gentoo
  • ArchLinux
  • openSUSE 12.3+
  • CRUX 3.0+

Cloud platforms:

  • Amazon EC2
  • Google Compute Engine
  • Microsoft Azure
  • Rackspace

Mention some commonly used Docker command?

Below are some commonly used Docker commands: docker run, docker ps, docker images, docker build, docker pull, docker push, docker exec, docker logs, docker stop, docker rm, and docker rmi.

Docker Network

Networking with standalone containers

Simply put, if you want to run two or more containers on the same user-defined network, you might do it like below:

$ docker network create --subnet= dynamodb-local-net   # the subnet value here is an example
$ docker run -dit --name alpine1 --network dynamodb-local-net alpine ash
$ docker network inspect dynamodb-local-net   # shows which containers are connected to this network

Use the default bridge network

In this example, you start two different alpine containers on the same Docker host and do some tests to understand how they communicate with each other. You need to have Docker installed and running.

  • Open a terminal window. List current networks before you do anything else. Here’s what you should see if you’ve never added a network or initialized a swarm on this Docker daemon. You may see different networks, but you should at least see these (the network IDs will be different):
$ docker network ls

NETWORK ID          NAME                DRIVER              SCOPE
17e324f45964        bridge              bridge              local
6ed54d316334        host                host                local
7092879f2cc8        none                null                local

The default bridge network is listed, along with host and none. The latter two are not fully-fledged networks, but are used to start a container connected directly to the Docker daemon host’s networking stack, or to start a container with no network devices. This tutorial will connect two containers to the bridge network.

  • Start two alpine containers running ash, which is Alpine’s default shell rather than bash. The -dit flags mean to start the container detached (in the background), interactive (with the ability to type into it), and with a TTY (so you can see the input and output). Since you are starting it detached, you won’t be connected to the container right away. Instead, the container’s ID will be printed. Because you have not specified any --network flags, the containers connect to the default bridge network.
$ docker run -dit --name alpine1 alpine ash

$ docker run -dit --name alpine2 alpine ash

Check that both containers are actually started:

$ docker container ls

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
602dbf1edc81        alpine              "ash"               4 seconds ago       Up 3 seconds                            alpine2
da33b7aa74b0        alpine              "ash"               17 seconds ago      Up 16 seconds                           alpine1

Inspect the bridge network to see what containers are connected to it.

$ docker network inspect bridge

Near the top, information about the bridge network is listed, including the IP address of the gateway between the Docker host and the bridge network ( Under the Containers key, each connected container is listed, along with information about its IP address ( for alpine1 and for alpine2).

The containers are running in the background. Use the docker attach command to connect to alpine1.

$ docker attach alpine1

/ #                  <------------------ you are now at the container's shell prompt; from here you can ping the internet or other containers

The prompt changes to # to indicate that you are the root user within the container. Use the ip addr show command to show the network interfaces for alpine1 as they look from within the container:

# ip addr show

The first interface is the loopback device. Ignore it for now. Notice that the second interface has the IP address, which is the same address shown for alpine1 in the previous step.

  • From within alpine1, make sure you can connect to the internet by pinging The -c 2 flag limits the command to two ping attempts.
# ping -c 2

PING ( 56 data bytes
64 bytes from seq=0 ttl=41 time=9.841 ms
64 bytes from seq=1 ttl=41 time=9.897 ms

--- ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 9.841/9.869/9.897 ms
  • Now try to ping the second container. First, ping it by its IP address,
# ping -c 2

This succeeds. Next, try pinging the alpine2 container by container name. This will fail.

# ping -c 2 alpine2
ping: bad address 'alpine2'
  • Detach from alpine1 without stopping it by using the detach sequence, CTRL + p CTRL + q (hold down CTRL and type p followed by q). If you wish, attach to alpine2 and repeat steps 4, 5, and 6 there, substituting alpine1 for alpine2.
  • Stop and remove both containers.
$ docker container stop alpine1 alpine2
$ docker container rm alpine1 alpine2
Note: the default bridge network is not recommended for production.

Docker network documentation :