Details about how to use Kong in Docker can be found on the DockerHub repository hosting the image: kong. We also have a Docker Compose template with built-in orchestration and scalability.
With a Database
Here is a quick example showing how to connect a Kong container to a Cassandra or PostgreSQL container.
Create a Docker network
You will need to create a custom network to allow the containers to discover and communicate with each other. In this example, kong-net is the network name, but you can use any name.
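A minimal sketch of the command (the name kong-net matches the examples below; substitute your own if you prefer):

    docker network create kong-net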
Start your database
If you wish to use a Cassandra container:
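As a sketch, this starts a Cassandra container on the shared network (the image tag cassandra:3 is an assumption; use a version your Kong release supports):

    docker run -d --name kong-database \
      --network=kong-net \
      -p 9042:9042 \
      cassandra:3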
If you wish to use a PostgreSQL container:
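Similarly, a sketch for PostgreSQL (the tag postgres:9.6 is an assumption; use a version your Kong release supports):

    docker run -d --name kong-database \
      --network=kong-net \
      -p 5432:5432 \
      -e "POSTGRES_USER=kong" \
      -e "POSTGRES_DB=kong" \
      postgres:9.6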
Prepare your database
Run the migrations with an ephemeral Kong container:
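A sketch of the command, pointing both database variables at the kong-database container started above:

    docker run --rm \
      --network=kong-net \
      -e "KONG_DATABASE=postgres" \
      -e "KONG_PG_HOST=kong-database" \
      -e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
      kong:latest kong migrations bootstrap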
In the above example, both Cassandra and PostgreSQL are configured, but you should update the KONG_DATABASE environment variable with either cassandra or postgres.

Note for Kong < 0.15: with Kong versions below 0.15 (up to 0.14), use the up sub-command instead of bootstrap. Also note that with Kong < 0.15, migrations should never be run concurrently; only one Kong node should be performing migrations at a time. This limitation is lifted for Kong 0.15, 1.0, and above.

Start Kong
When the migrations have run and your database is ready, start a Kong container that will connect to your database container, just like the ephemeral migrations container:
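A sketch, mirroring the environment of the ephemeral migrations container and exposing Kong's default ports (8000/8443 for the proxy, 8001/8444 for the Admin API):

    docker run -d --name kong \
      --network=kong-net \
      -e "KONG_DATABASE=postgres" \
      -e "KONG_PG_HOST=kong-database" \
      -e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
      -e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
      -e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
      -e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
      -e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
      -e "KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl" \
      -p 8000:8000 \
      -p 8443:8443 \
      -p 127.0.0.1:8001:8001 \
      -p 127.0.0.1:8444:8444 \
      kong:latest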
Use Kong
Kong is running:
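You can verify this by querying the Admin API on the port mapped above:

    curl -i http://localhost:8001/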
Quickly learn how to use Kong with the 5-minute Quickstart.
DB-less mode
The steps involved in starting Kong in DB-less mode are the following:
Create a Docker network
This is the same as in the Pg/Cassandra guide. We are again using kong-net as the network name, and it can also be changed to something else.

This step is not strictly needed for running Kong in DB-less mode, but it is a good precaution in case you want to add other things in the future (like a rate-limiting plugin backed by a Redis cluster).
Create a Docker volume
For the purposes of this guide, a Docker Volume is a folder inside the host machine which can be mapped into a folder in the container. Volumes have a name. In this case we're going to name ours kong-vol:
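A sketch of the command:

    docker volume create kong-vol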
You should be able to inspect the volume now:
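For example:

    docker volume inspect kong-vol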
The result should be similar to this:
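A sketch of the expected output (the CreatedAt timestamp is an example and will differ on your machine):

    [
        {
            "CreatedAt": "2020-05-28T12:40:09Z",
            "Driver": "local",
            "Labels": {},
            "Mountpoint": "/var/lib/docker/volumes/kong-vol/_data",
            "Name": "kong-vol",
            "Options": {},
            "Scope": "local"
        }
    ]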
Notice the Mountpoint entry. We will need that path in the next step.

Prepare your declarative configuration file
The syntax and properties are described in the Declarative Configuration Format guide.
Add whatever core entities (Services, Routes, Plugins, Consumers, etc) you need there.
In this guide we'll assume you named it kong.yml. Save it inside the Mountpoint path mentioned in the previous step. In the case of this guide, that would be /var/lib/docker/volumes/kong-vol/_data/kong.yml.
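As a starting point, a minimal sketch of such a file (the service name, route, and upstream URL are placeholders; the _format_version value depends on your Kong release):

    _format_version: "1.1"

    services:
    - name: example-service
      url: http://example.com
      routes:
      - name: example-route
        paths:
        - /example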
Start Kong in DB-less mode
Although it's possible to start the Kong container with just KONG_DATABASE=off, it is usually desirable to also include the declarative configuration file as a parameter via the KONG_DECLARATIVE_CONFIG variable. In order to do this, we need to make the file "visible" from within the container. We achieve this with the -v flag, which maps the kong-vol volume to the /usr/local/kong/declarative folder in the container.
Use Kong
Kong should be running and it should contain some of the entities added in kong.yml:
For example, get a list of services:
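    curl http://localhost:8001/services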
Follow Up:
The Intel® Distribution of OpenVINO™ toolkit quickly deploys applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNN), the toolkit extends computer vision (CV) workloads across Intel® hardware, maximizing performance. The Intel® Distribution of OpenVINO™ toolkit includes the Intel® Deep Learning Deployment Toolkit.
This guide provides the steps for creating a Docker* image with Intel® Distribution of OpenVINO™ toolkit for Linux* and further installation.
System Requirements
Target Operating Systems
- Ubuntu* 18.04 long-term support (LTS), 64-bit
- Ubuntu* 20.04 long-term support (LTS), 64-bit
- CentOS* 7.6
- Red Hat* Enterprise Linux* 8.2 (64 bit)
Host Operating Systems
- Linux with a GPU driver installed and a Linux kernel supported by that driver
Prebuilt images
Prebuilt images are available on the Intel® Distribution of OpenVINO™ toolkit Docker Hub* page: https://hub.docker.com/u/openvino
Use a Docker* Image for CPU
- The kernel reports the same information (for example, CPU and memory) for a container as for a native application.
- All instructions available to the host process are available to a process in a container, including, for example, AVX2 and AVX512. There are no restrictions.
- Docker* does not use virtualization or emulation. A process in Docker* is just a regular Linux process that is isolated from the external world at the kernel level, so the performance penalty is small.
Build a Docker* Image for CPU
You can use available Dockerfiles or generate a Dockerfile with your setting via DockerHub CI Framework for Intel® Distribution of OpenVINO™ toolkit. The Framework can generate a Dockerfile, build, test, and deploy an image with the Intel® Distribution of OpenVINO™ toolkit.
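As a sketch, once you have a Dockerfile (generated by the framework or hand-written), the build itself is a plain docker build; the tag my-openvino-cpu here is a placeholder:

    docker build . -t my-openvino-cpu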
Run the Docker* Image for CPU
Run the image with the following command:
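No special devices need to be attached for CPU inference; <image_name> is the image you built or pulled:

    docker run -it --rm <image_name>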
Use a Docker* Image for GPU
Build a Docker* Image for GPU
Prerequisites:
- The GPU is not available in the container by default; you must attach it to the container.
- The kernel driver must be installed on the host.
- The Intel® OpenCL™ runtime package must be included in the container.
- In the container, a non-root user must be in the video and render groups. To add a user to the render group, follow the Configuration Guide for the Intel® Graphics Compute Runtime for OpenCL™ on Ubuntu* 20.04.
Before building a Docker* image on GPU, add the following commands to a Dockerfile:
Ubuntu 18.04/20.04:
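A sketch, assuming Ubuntu 20.04, where the OpenCL™ ICD loader and the Intel® runtime are available as distribution packages (on Ubuntu 18.04 the runtime is typically installed instead from the .deb packages published on the intel/compute-runtime GitHub releases page, and the exact package set is version-dependent):

    RUN apt-get update && \
        apt-get install -y --no-install-recommends \
            ocl-icd-libopencl1 intel-opencl-icd && \
        rm -rf /var/lib/apt/lists/*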
CentOS 7/RHEL 8:
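A sketch for CentOS 7/RHEL 8, assuming the OpenCL™ ICD loader from the distribution repositories plus runtime RPMs from the intel/compute-runtime releases page (the runtime RPM names and versions are assumptions; check the release notes for your driver):

    RUN yum update -y && \
        yum install -y ocl-icd && \
        yum clean all && rm -rf /var/cache/yum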
Run the Docker* Image for GPU
To make the GPU available in the container, attach the GPU to the container using the --device /dev/dri option and run the container:
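    docker run -it --rm --device /dev/dri <image_name>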
NOTE: If your host system is Ubuntu 20, follow the Configuration Guide for the Intel® Graphics Compute Runtime for OpenCL™ on Ubuntu* 20.04.
Use a Docker* Image for Intel® Neural Compute Stick 2
Build and Run the Docker* Image for Intel® Neural Compute Stick 2
Known limitations:
- The Intel® Neural Compute Stick 2 device changes its VendorID and DeviceID during execution, so each time it appears to the host system as a brand new device. This means it cannot be mounted as usual.
- UDEV events are not forwarded to the container by default, so the container does not know about device reconnection.
- Only one device per host is supported.
Use one of the following options as possible solutions for Intel® Neural Compute Stick 2:
Option #1
- Get rid of UDEV by rebuilding libusb without UDEV support in the Docker* image (add the following commands to a Dockerfile):

- Ubuntu 18.04/20.04:

    ARG BUILD_DEPENDENCIES="autoconf \
                            automake \
                            build-essential \
                            libtool \
                            unzip \
                            udev"

    RUN apt-get update && \
        apt-get install -y --no-install-recommends ${BUILD_DEPENDENCIES} && \
        rm -rf /var/lib/apt/lists/*

    WORKDIR /opt
    RUN curl -L https://github.com/libusb/libusb/archive/v1.0.22.zip --output v1.0.22.zip && \
        unzip v1.0.22.zip && rm -rf v1.0.22.zip

    WORKDIR /opt/libusb-1.0.22
    RUN ./bootstrap.sh && \
        ./configure --disable-udev --enable-shared && \
        make -j4

    WORKDIR /opt/libusb-1.0.22/libusb
    RUN /bin/mkdir -p '/usr/local/lib' && \
        /bin/bash ../libtool --mode=install /usr/bin/install -c libusb-1.0.la '/usr/local/lib' && \
        /bin/mkdir -p '/usr/local/include/libusb-1.0' && \
        /usr/bin/install -c -m 644 libusb.h '/usr/local/include/libusb-1.0' && \
        /bin/mkdir -p '/usr/local/lib/pkgconfig'

    WORKDIR /opt/libusb-1.0.22/
    RUN /usr/bin/install -c -m 644 libusb-1.0.pc '/usr/local/lib/pkgconfig' && \
        cp /opt/intel/openvino_2021/deployment_tools/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/ && \
        ldconfig

- CentOS 7:

    ARG BUILD_DEPENDENCIES="autoconf \
                            automake \
                            libtool \
                            unzip \
                            udev"

    RUN yum update -y && yum install -y ${BUILD_DEPENDENCIES} && \
        yum clean all && rm -rf /var/cache/yum

    WORKDIR /opt
    RUN curl -L https://github.com/libusb/libusb/archive/v1.0.22.zip --output v1.0.22.zip && \
        unzip v1.0.22.zip && rm -rf v1.0.22.zip

    WORKDIR /opt/libusb-1.0.22
    RUN ./bootstrap.sh && \
        ./configure --disable-udev --enable-shared && \
        make -j4

    WORKDIR /opt/libusb-1.0.22/libusb
    RUN /bin/mkdir -p '/usr/local/lib' && \
        /bin/bash ../libtool --mode=install /usr/bin/install -c libusb-1.0.la '/usr/local/lib' && \
        /bin/mkdir -p '/usr/local/include/libusb-1.0' && \
        /usr/bin/install -c -m 644 libusb.h '/usr/local/include/libusb-1.0' && \
        printf '\nexport LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/lib\n' >> /opt/intel/openvino_2021/bin/setupvars.sh

    WORKDIR /opt/libusb-1.0.22/
    RUN /usr/bin/install -c -m 644 libusb-1.0.pc '/usr/local/lib/pkgconfig' && \
        cp /opt/intel/openvino_2021/deployment_tools/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/ && \
        ldconfig
- Run the Docker* image:

    docker run -it --rm --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb <image_name>
Option #2
Run container in the privileged mode, enable the Docker network configuration as host, and mount all devices to the container:
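A sketch of such a run (insecure, per the notes below):

    docker run -it --rm --privileged -v /dev:/dev --network=host <image_name>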
NOTES:
- It is not secure.
- Conflicts may occur with Kubernetes* and other tools that use orchestration and private networks.
Use a Docker* Image for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
Build Docker* Image for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
To use the Docker container for inference on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs:
- Set up the environment on the host machine that is going to be used for running Docker*. It is required to execute hddldaemon, which is responsible for communication between the HDDL plugin and the board. To learn how to set up the environment (the OpenVINO package or HDDL package must be pre-installed), see the Configuration guide for the HDDL device or the Configuration Guide for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.
- Prepare the Docker* image (add the following commands to a Dockerfile).
- Ubuntu 18.04:

    RUN apt-get update && \
        apt-get install -y --no-install-recommends \
            libboost-filesystem1.65-dev \
            libboost-thread1.65-dev \
            libjson-c3 \
            libxxf86vm-dev && \
        rm -rf /var/lib/apt/lists/* && rm -rf /tmp/*

- Ubuntu 20.04:

    RUN apt-get update && \
        apt-get install -y --no-install-recommends \
            libboost-filesystem-dev \
            libboost-thread-dev \
            libjson-c4 \
            libxxf86vm-dev && \
        rm -rf /var/lib/apt/lists/* && rm -rf /tmp/*

- CentOS 7:

    RUN yum update -y && \
        yum install -y \
            boost-filesystem \
            boost-thread \
            boost-system \
            boost-date-time \
            boost-atomic \
            json-c \
            libXxf86vm-devel && \
        yum clean all && rm -rf /var/cache/yum
- Run hddldaemon on the host in a separate terminal session using the following command:
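A sketch, where $HDDL_INSTALL_DIR points at the HDDL package location set up in the configuration guide (the exact path is an assumption):

    $HDDL_INSTALL_DIR/bin/hddldaemon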
Run the Docker* Image for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
To run the built Docker* image for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, use the following command:
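A sketch, sharing the ion device and /var/tmp as required by the notes below:

    docker run -it --rm --device=/dev/ion:/dev/ion -v /var/tmp:/var/tmp <image_name>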
NOTES:
- The device /dev/ion needs to be shared to be able to use ion buffers among the plugin, hddldaemon, and the kernel.
- Since separate inference tasks share the same HDDL service communication interface (the service creates mutexes and a socket file in /var/tmp), /var/tmp needs to be mounted and shared among them.
In some cases, the ion driver is not enabled (for example, due to a newer kernel version or an iommu incompatibility), and lsmod | grep myd_ion returns empty output. To resolve this, use the following command instead:
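A sketch of the fallback, which relies on host networking and IPC instead of the ion device:

    docker run -it --rm --net=host -v /var/tmp:/var/tmp --ipc=host <image_name>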
NOTES:
- When building Docker images, create a user in the Dockerfile that has the same UID and GID as the user that runs hddldaemon on the host.
- Run the application in the Docker container as this user.
- Alternatively, you can start hddldaemon as the root user on the host, but this approach is not recommended.
Run Demos in the Docker* Image
To run the Security Barrier Camera Demo on a specific inference device, run the following commands with root privileges (additional third-party dependencies will be installed):
CPU:
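A sketch, assuming a 2021-layout Ubuntu dev image where the demo script lives under deployment_tools/demo relative to the image's working directory (the script path and the -sample-options flag are assumptions carried over from that layout):

    docker run -itu root:root --rm <image_name> \
      /bin/bash -c "apt update && apt install -y sudo && deployment_tools/demo/demo_security_barrier_camera.sh -d CPU -sample-options -no_show"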
GPU:
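The same sketch with the GPU attached via --device /dev/dri:

    docker run -itu root:root --rm --device /dev/dri:/dev/dri <image_name> \
      /bin/bash -c "apt update && apt install -y sudo && deployment_tools/demo/demo_security_barrier_camera.sh -d GPU -sample-options -no_show"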
MYRIAD:
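For MYRIAD, the USB passthrough flags from Option #1 above are reused:

    docker run -itu root:root --rm --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb <image_name> \
      /bin/bash -c "apt update && apt install -y sudo && deployment_tools/demo/demo_security_barrier_camera.sh -d MYRIAD -sample-options -no_show"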
HDDL:
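For HDDL, the ion device and /var/tmp are shared as described above:

    docker run -itu root:root --rm --device=/dev/ion:/dev/ion -v /var/tmp:/var/tmp <image_name> \
      /bin/bash -c "apt update && apt install -y sudo && deployment_tools/demo/demo_security_barrier_camera.sh -d HDDL -sample-options -no_show"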
Use a Docker* Image for FPGA
Intel will be transitioning to the next-generation programmable deep-learning solution based on FPGAs in order to increase the level of customization possible in FPGA deep-learning. As part of this transition, future standard releases (i.e., non-LTS releases) of Intel® Distribution of OpenVINO™ toolkit will no longer include the Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA.
Intel® Distribution of OpenVINO™ toolkit 2020.3.X LTS release will continue to support Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA. For questions about next-generation programmable deep-learning solutions based on FPGAs, please talk to your sales representative or contact us to get the latest FPGA updates.
For instructions for previous releases with FPGA Support, see documentation for the 2020.4 version or lower.
Troubleshooting
If you have proxy issues, set up proxy settings for Docker. See the Proxy section in the Install the DL Workbench from Docker Hub* topic.
Additional Resources
- DockerHub CI Framework for Intel® Distribution of OpenVINO™ toolkit. The Framework can generate a Dockerfile, build, test, and deploy an image with the Intel® Distribution of OpenVINO™ toolkit. You can reuse available Dockerfiles, add your layer and customize the image of OpenVINO™ for your needs.
- Intel® Distribution of OpenVINO™ toolkit home page: https://software.intel.com/en-us/openvino-toolkit
- OpenVINO™ toolkit documentation: https://docs.openvinotoolkit.org
- Intel® Neural Compute Stick 2 Get Started: https://software.intel.com/en-us/neural-compute-stick/get-started
- Intel® Distribution of OpenVINO™ toolkit Docker Hub* home page: https://hub.docker.com/u/openvino