Docker

Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.

Docker is similar in concept to a virtual machine, except that it uses fewer resources and provides less isolation.

Docker is a tool that provides a container - i.e. a lightweight virtual-machine-like construct, ideally designed to run just one process. The advantage is platform portability. Good basic intro here: http://elliot.land/post/docker-explained-simply

Basics

It's probably best to match one container to one process. Running multiple processes requires a process manager (a container starts only one process), which adds complexity to container startup/shutdown.

Docker Compose is a separately installed tool for defining and sharing multi-container (i.e. networked) applications. A YAML file defines the services.
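For example, a minimal docker-compose.yml might look like the sketch below (the web/db service names, images, and ports are placeholders, not part of any real setup here):

# docker-compose.yml - minimal sketch; service names and images are placeholders
version: "3"
services:
  web:
    build: .
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: example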

Docker has no built-in support for waiting until another container is fully up, running, and ready before starting a dependent container; that logic has to live inside the app.
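A common workaround is a small retry loop baked into the container's entrypoint or startup script, something like this sketch (the db host and port 5432 are placeholders):

#!/bin/sh
# wait-for-db.sh - hypothetical wrapper script; polls a dependency
# until it accepts connections, then hands off to the real command
until nc -z db 5432; do
  echo "waiting for db..."
  sleep 1
done
exec "$@"

It would typically be set as the image's ENTRYPOINT, with the real command passed as arguments.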

ASI uses the Docker Engine on Linux and Windows: https://hub.docker.com/editions/community/docker-ce-server-ubuntu "Docker Engine - Ubuntu (Community) is the best way to install the Docker platform on Ubuntu Linux environments." The free/open-source tool is fine for now; no extra licensing is needed.

When installing from the apt repository, the docker group is created but no users are added to it, so by default you need sudo to run Docker commands.
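If you want to drop the sudo, the usual post-install step is to add your user to that group (it takes effect after logging out and back in):

# Add the current user to the docker group
sudo usermod -aG docker $USER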

Installation

To install Docker CE (community edition, as opposed to enterprise edition), you need the 64-bit version of one of these Ubuntu versions:

    Zesty 17.04
    Xenial 16.04 (LTS)
    Trusty 14.04 (LTS)

Docker CE is supported on Ubuntu on x86_64, armhf, and s390x (IBM z Systems) architectures.

There's also a lengthy install process... not as simple as just doing an apt-get. https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/#install-docker-ce
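In outline, the repo-based install looks roughly like the following; treat the linked page as authoritative, since keys and repo URLs can change:

# Prerequisites for using an HTTPS apt repository
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common

# Add Docker's official GPG key and the stable repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

# Install Docker CE
sudo apt-get update
sudo apt-get install docker-ce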

These are obsolete package names: docker, docker-engine, docker.io. The current package is called docker-ce.
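The docs recommend removing any of those older packages before installing docker-ce:

# Uninstall old versions (it's fine if apt reports none are installed)
sudo apt-get remove docker docker-engine docker.io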

Notes on Usage

If you need to add Debian package dependencies required for your build, the Dockerfile is the place to do it. In the Buildroot tree it's stored under support/docker/. Here is a sample version:

# Install dependencies
RUN dpkg --add-architecture i386 && apt-get update && \
  DEBIAN_FRONTEND=noninteractive apt-get install -y \
    bc \
    build-essential \
    cpio \
    git \
    graphviz \
    libc6:i386

# Mount point for the source tree; make runs from here
VOLUME /src
WORKDIR /src

# Pass arguments to make, using the real Makefile
ENTRYPOINT ["/usr/bin/make", "-f", "Makefile.real"]

After you make changes to the list, run make docker.

Files/Process

The docker image you wish to build is configured using a file named "Dockerfile".

Dockerfile -> docker build -> docker image:
Each build creates a docker image, and all the images on your machine can be listed with docker images. On Linux the image files are stored under /var/lib/docker. Images are not writable; a container adds a "top writable layer" on top of an image, and all changes go into that writable layer, which is lost when the container is deleted.
Simple usage: docker build -t "imagename:tag" .
The -t option tags the image (to be specific, as repo:tag). Each new image is also given a SHA256 hex ID.
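A couple of related housekeeping commands (the image name is a placeholder):

# List all local images (repo, tag, image ID, created, size)
docker images

# Remove an image you no longer need
docker rmi imagename:tag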

image -> docker run -> active “running” container:
Adding the -it flags to the run command attaches you to the container's console (root prompt), which is very handy for debugging. If the container has exited, it still shows up with docker ps -a in an exited state. The container can be restarted with docker start <containername>, and you can reconnect to its console with docker attach <containername>.
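Putting that lifecycle together (image and container names are placeholders, and the run line assumes the image contains a shell):

# Start a container interactively from an image and give it a name
docker run -it --name mybox imagename:tag /bin/bash

# After exiting, the stopped container is still listed
docker ps -a

# Restart it and reattach to its console
docker start mybox
docker attach mybox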

.yml file -> docker-compose:
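Typical usage against that YAML file:

# Build (if needed) and start every service defined in docker-compose.yml, detached
docker-compose up -d

# Stop and remove the containers and network it created
docker-compose down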

Docker development best practices: https://docs.docker.com/develop/dev-best-practices/

Sharing

We can share the Docker image holding our application with other companies, even if they just use a secure FTP server rather than a Docker registry (AKA repo).

Key commands for sharing are:

    docker save and docker load for an image
    docker export (export a container's filesystem as a tar archive) and docker import for a container

However, sharing a container appears to be problematic: the importing client's docker daemon simply creates an unnamed image from the imported container. More on this: "Unlike images, it isn’t intended to share Docker containers. They’re much larger, containing all sorts of installed applications and configuration information. Also, as we mentioned, they don’t save their state. If you were to try to transfer a Docker container from one computer to another, you’d find that when you started it on the second computer, it would revert back to the original installed image. Instead, if you need to share your work on a container, you should create an image, and share that. Conversely, your work inside a container shouldn’t modify the container. Like previously mentioned, files that you need to save past the end of a container’s life should be kept in a shared folder."
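If work done inside a container really does need to be handed around, one option is docker commit, which snapshots the container's current state into a new image that can then be saved and shared like any other (the names below are placeholders):

# Turn a container's current state into a new, shareable image
docker commit mybox myapp:snapshot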

We use the image save command, and the customer will use the image load command:
docker save myimage:latest | gzip > myimage_latest.tar.gz

To load it to the Docker system, use
docker load -i myimage_latest.tar.gz
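For the secure-transfer hand-off mentioned above, the archive can be moved with any file-transfer tool, e.g. over SSH (host and path are placeholders):

# Copy the compressed image archive to the customer's host
scp myimage_latest.tar.gz customer@example.com:/tmp/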

More official information about sharing images:
"To share Docker images, you have to use a Docker registry. The default registry is Docker Hub and is where all of the images we've used have come from. It's a cloud-based repo service, like GitHub, for testing/storing/distributing container images."

GitLab offers something Docker Hub does not: a self-managed container registry - one that can be self-installed and self-managed in an organization's data center, co-hosted, or in a chosen cloud provider. https://about.gitlab.com/install/

