Learnt the hard way…

The [Linux] volume manager also allows reducing the amount of disk space allocated to a logical volume, but there are a couple of requirements. First, the volume must be unmounted. Second, the filesystem itself must be reduced in size before the volume on which it resides can be reduced.

https://opensource.com/business/16/9/linux-users-guide-lvm
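
A minimal sketch of that shrink sequence, assuming an ext4 filesystem on a hypothetical logical volume /dev/vg0/lv_data being reduced to 20G:

sudo umount /dev/vg0/lv_data
sudo e2fsck -f /dev/vg0/lv_data        # required before resize2fs will shrink
sudo resize2fs /dev/vg0/lv_data 20G    # shrink the filesystem first
sudo lvreduce -L 20G /dev/vg0/lv_data  # then shrink the volume to match
sudo mount /dev/vg0/lv_data /mnt

Recent LVM versions can combine the last two steps with lvreduce -r, which resizes the filesystem for you.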

Each volume within a volume group is segmented into small, fixed-size chunks called extents. The size of the extents is determined by the volume group (all volumes within the group conform to the same extent size).

The extents on a physical volume are called physical extents, while the extents of a logical volume are called logical extents. A logical volume is simply a mapping that LVM maintains between logical and physical extents.

The logical extents that are presented as a unified device by LVM do not have to map to contiguous physical extents.

https://www.digitalocean.com/community/tutorials/an-introduction-to-lvm-concepts-terminology-and-operations
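
To inspect the extent size and the logical-to-physical mapping, something like this should work (vg0 and lv_data are hypothetical names):

sudo vgdisplay vg0 | grep 'PE Size'   # extent size is a property of the volume group
sudo lvdisplay -m /dev/vg0/lv_data    # -m prints the segment-to-physical-volume mapping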

Sync with an NTP server in CentOS

From https://www.thegeekdiary.com/centos-rhel-how-to-configure-ntp-server-and-client/.

>> sudo yum install ntp

>> cat /etc/ntp.conf
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict -6 ::1
server *** PUT SERVER IP/DN here ***
driftfile /var/lib/ntp/drift
keys /etc/ntp/keys

>> sudo systemctl enable ntpd
>> sudo systemctl start ntpd
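
Once ntpd is running, you can verify synchronization with:

>> ntpq -p

An asterisk in front of a server in the output marks the peer ntpd is currently synchronized to.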

Mount a disk image partition

In case NBD (https://gist.github.com/shamil/62935d9b456a6f9877b5) is not available, e.g. on CentOS 7, you can try this.

From https://www.linuxquestions.org/questions/linux-general-1/how-to-mount-img-file-882386/, first get the partition's starting sector:

[labuser@tip-dev-1 ~]$ sudo fdisk -l disk-ia.img

Disk disk-ia.img: 113.6 GB, 113561141760 bytes, 221799105 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x419f5b94

      Device Boot      Start         End      Blocks   Id  System
disk-ia.img1   *        2048     1050623      524288    b  W95 FAT32
disk-ia.img2         1052670   209713151   104330241    5  Extended
disk-ia.img5         1052672   209713151   104330240   83  Linux

If we are interested in the disk-ia.img5 partition, its byte offset is 1052672 × 512 = 538968064, so we can mount it using:

sudo mount -o loop,offset=538968064 disk-ia.img /mnt/
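
The offset arithmetic can also be scripted; a rough sketch for the same image and partition (the awk pattern assumes the fdisk output shown above):

OFFSET=$(( $(sudo fdisk -l disk-ia.img | awk '/img5/ {print $2}') * 512 ))
sudo mount -o loop,offset=$OFFSET disk-ia.img /mnt/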

Finally, when you are done, just umount /mnt. If in doubt, you can check with losetup --list that the umount command destroys the loop device that was created.

NVIDIA GPUs

The NVIDIA GPU Operator [is] based on the operator framework and automates the management of all NVIDIA software components needed to provision […] GPU worker nodes in a Kubernetes cluster – the driver, container runtime, device plugin and monitoring.

The GPU operator should run on nodes that are equipped with GPUs. To determine which nodes have GPUs, the operator relies on Node Feature Discovery (NFD) within Kubernetes.

https://developer.nvidia.com/blog/nvidia-gpu-operator-simplifying-gpu-management-in-kubernetes/
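
Deployment is typically done with Helm; a minimal sketch using NVIDIA's published chart (release name and namespace are arbitrary choices here):

$ helm repo add nvidia https://helm.ngc.nvidia.com/nvidia && helm repo update
$ helm install gpu-operator nvidia/gpu-operator -n gpu-operator --create-namespace

The chart deploys NFD by default, which labels GPU nodes (e.g. feature.node.kubernetes.io/pci-10de.present=true, 10de being NVIDIA's PCI vendor ID) so the operator knows where to install its components.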

NVIDIA Container Runtime is a GPU-aware container runtime, compatible with the Open Containers Initiative (OCI) specification used by Docker and CRI-O.

https://developer.nvidia.com/nvidia-container-runtime

Start a GPU-enabled CUDA container […] and specify the nvidia runtime:

[edited] $ docker run --rm --runtime=nvidia --gpus=all nvcr.io/nvidia/cuda:latest nvidia-smi

GPUs can be specified to the Docker CLI using either the --gpus option starting with Docker 19.03 or using the environment variable NVIDIA_VISIBLE_DEVICES. This variable controls which GPUs will be made accessible inside the container.

https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/user-guide.html
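
The two selection mechanisms look like this in practice (the image tag is just an example):

$ docker run --rm --gpus '"device=0,1"' nvcr.io/nvidia/cuda:latest nvidia-smi
$ docker run --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0,1 nvcr.io/nvidia/cuda:latest nvidia-smi

Both invocations expose only GPUs 0 and 1 inside the container.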

GPU device handling

For VMs:

[PCI passthrough] enables a guest to directly use physical PCI devices on the host, even if the host does not have drivers for this particular device.

https://docs.oracle.com/en/virtualization/virtualbox/6.0/admin/pcipassthrough.html

For containers:

Make sure you have installed the NVIDIA driver and Docker engine for your Linux distribution. Note that you do not need to install the CUDA Toolkit on the host system, but the NVIDIA driver needs to be installed.

https://github.com/NVIDIA/nvidia-docker
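
A quick smoke test once the driver, Docker, and the NVIDIA container toolkit are installed (the CUDA image tag is an assumption; pick one matching your driver):

$ docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

If everything is wired up correctly, nvidia-smi inside the container lists the host's GPUs.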