jdeps is a tool that analyzes the dependencies of a jar file and generates a list of the modules needed to run the application.
jlink is a tool that creates a custom runtime image containing only the modules needed to run your application.
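A minimal sketch of that workflow (the jar name, module list, and output directory are placeholders):
$ jdeps --print-module-deps app.jar
$ jlink --add-modules java.base,java.logging --output custom-runtime
$ ./custom-runtime/bin/java -jar app.jar
The modules printed by jdeps become the --add-modules argument to jlink, and the trimmed runtime in custom-runtime/ is what gets shipped instead of a full JDK.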
Tag: docker
User inside docker
I built a Docker image that has a user named “appuser” and this user has a defined uid of 1001. On my test server, the account I’m using is named “marc”, and it also has the uid of 1001. When I start the container, the sleep command executes as appuser, because the Dockerfile contains the line “USER appuser”. But this really doesn’t make it run as appuser; it makes it run as the uid of the user that the Docker image knows as appuser.
https://medium.com/@mccode/understanding-how-uid-and-gid-work-in-docker-containers-c37a01d01cf
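A minimal Dockerfile sketch of that setup (base image, user name and command are illustrative):
FROM alpine
RUN adduser -D -u 1001 appuser
USER appuser
CMD ["sleep", "3600"]
From the host’s point of view, the sleep process is simply owned by uid 1001, which is why it appears to belong to “marc” on the test server.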
Docker in Docker
/var/run/docker.sock is the default Unix socket. […] The Docker daemon by default listens to docker.sock. To run docker inside docker, all you have to do is run docker with the default Unix socket docker.sock as a volume.
$ docker run -v /var/run/docker.sock:/var/run/docker.sock -ti docker
Now, from within the container, you should be able to execute docker commands for building and pushing images to the registry.
[…] the actual docker operations happen on the VM host running your base docker container rather than from within the container.
https://devopscube.com/run-docker-in-docker/
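From inside such a container, ordinary build and push commands work, but they are served by the host’s daemon through the mounted socket (registry and image names below are placeholders):
$ docker build -t registry.example.com/myapp:1.0 .
$ docker push registry.example.com/myapp:1.0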
kaniko is an open-source container image-building tool created by Google. […] all the image-building operations happen inside the Kaniko container’s userspace.
https://devopscube.com/build-docker-image-kubernetes-pod/
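A rough sketch of running the kaniko executor directly with Docker, building without pushing (the mounted path and flag values are illustrative assumptions, not taken from the article):
$ docker run -v $(pwd):/workspace gcr.io/kaniko-project/executor:latest --dockerfile=/workspace/Dockerfile --context=dir:///workspace --no-push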
nVidia GPUs
The NVIDIA GPU Operator [is] based on the operator framework and automates the management of all NVIDIA software components needed to provision […] GPU worker nodes in a Kubernetes cluster – the driver, container runtime, device plugin and monitoring.
The GPU operator should run on nodes that are equipped with GPUs. To determine which nodes have GPUs, the operator relies on Node Feature Discovery (NFD) within Kubernetes.
https://developer.nvidia.com/blog/nvidia-gpu-operator-simplifying-gpu-management-in-kubernetes/
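The operator is typically installed with Helm; a hedged sketch, assuming the chart location and flags from NVIDIA’s published instructions (subject to change):
$ helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
$ helm repo update
$ helm install --wait --generate-name -n gpu-operator --create-namespace nvidia/gpu-operator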
NVIDIA Container Runtime is a GPU aware container runtime, compatible with the Open Containers Initiative (OCI) specification used by Docker and CRI-O.
https://developer.nvidia.com/nvidia-container-runtime
Starting a GPU enabled CUDA container […] specifying the nvidia runtime:
[edited] $ docker run --rm --runtime=nvidia --gpus=all nvcr.io/nvidia/cuda:latest nvidia-smi
GPUs can be specified to the Docker CLI using either the --gpus option starting with Docker 19.03 or using the environment variable NVIDIA_VISIBLE_DEVICES. This variable controls which GPUs will be made accessible inside the container.
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/user-guide.html
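Two hedged variants of the earlier command that expose only specific devices (device indices are examples):
$ docker run --rm --gpus '"device=0,1"' nvcr.io/nvidia/cuda:latest nvidia-smi
$ docker run --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0 nvcr.io/nvidia/cuda:latest nvidia-smi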
GPU device handling
For VMs:
[PCI passthrough] enables a guest to directly use physical PCI devices on the host, even if the host does not have drivers for this particular device.
https://docs.oracle.com/en/virtualization/virtualbox/6.0/admin/pcipassthrough.html
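In VirtualBox this is done with VBoxManage; a sketch assuming the syntax documented on the page above, where the address before the @ is the host device and the one after it is the guest slot (addresses are examples):
$ VBoxManage modifyvm "VM name" --pciattach 01:00.0@01:05.0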
For containers:
Make sure you have installed the NVIDIA driver and Docker engine for your Linux distribution. Note that you do not need to install the CUDA Toolkit on the host system, but the NVIDIA driver needs to be installed.
https://github.com/NVIDIA/nvidia-docker
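On a Debian/Ubuntu host this typically boils down to something like the following, assuming NVIDIA’s package repository has already been configured:
$ sudo apt-get install -y nvidia-container-toolkit
$ sudo systemctl restart docker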
Local docker registry
Use a command like the following to start the registry container:
https://docs.docker.com/registry/deploying/
$ docker run -d -p 5000:5000 --restart=always --name registry registry:2
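Once it is running, images can be tagged with the localhost:5000 prefix and pushed to it (image names are illustrative):
$ docker tag ubuntu:22.04 localhost:5000/my-ubuntu
$ docker push localhost:5000/my-ubuntu
$ docker pull localhost:5000/my-ubuntu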