AGIC and Let’s Encrypt

AGIC helps eliminate the need to have another load balancer/public IP in front of the AKS cluster and avoids multiple hops in your datapath before requests reach the AKS cluster. Application Gateway talks to pods using their private IP directly and does not require NodePort or KubeProxy services. This also brings better performance to your deployments.

https://docs.microsoft.com/en-us/azure/application-gateway/ingress-controller-overview
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: aspnetapp
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: aspnetapp
          servicePort: 80

ingress-shim watches Ingress resources across your cluster. If it observes an Ingress with annotations described in the Supported Annotations section, it will ensure a Certificate resource with the name provided in the tls.secretName field and configured as described on the Ingress exists.

https://cert-manager.io/docs/usage/ingress/
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    # add an annotation indicating the issuer to use.
    cert-manager.io/cluster-issuer: nameOfClusterIssuer
  name: myIngress
  namespace: myIngress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: myservice
            port:
              number: 80
  tls: # < placing a host in the TLS config will determine what ends up in the cert's subjectAltNames
  - hosts:
    - example.com
    secretName: myingress-cert # < cert-manager will store the created certificate in this secret.

The ACME Issuer type represents a single account registered with the Automated Certificate Management Environment (ACME) Certificate Authority server. When you create a new ACME Issuer, cert-manager will generate a private key which is used to identify you with the ACME server.

https://cert-manager.io/docs/configuration/acme/#solving-challenges
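
For reference, a minimal ACME ClusterIssuer could look roughly like this (issuer name, email and ingress class are placeholders; the Let's Encrypt staging endpoint is used so rate limits don't bite while testing):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging              # hypothetical name
spec:
  acme:
    # Let's Encrypt staging endpoint; swap in the production URL once things work
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: you@example.com               # placeholder contact address
    privateKeySecretRef:
      # cert-manager stores the generated ACME account key in this secret
      name: letsencrypt-staging-account-key
    solvers:
    - http01:
        ingress:
          class: azure/application-gateway   # assuming AGIC serves the HTTP-01 challenge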

SSL is handled by the ingress controller, not the ingress resource. Meaning, when you add TLS certificates to the ingress resource as a Kubernetes secret, the ingress controller accesses it and makes it part of its configuration.

https://devopscube.com/configure-ingress-tls-kubernetes/
apiVersion: v1
kind: Secret
metadata:
  name: hello-app-tls
  namespace: dev
type: kubernetes.io/tls
data:
  # a kubernetes.io/tls secret expects the keys tls.crt and tls.key,
  # and values under data must be base64-encoded PEM
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>

You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress point (traffic to the Service and its Pods is in plaintext).

Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a certificate that contains a Common Name (CN), also known as a Fully Qualified Domain Name (FQDN) for https-example.foo.com.

https://kubernetes.io/docs/concepts/services-networking/ingress/

I found this error when installing the helm chart and creating the Issuer:

The custom resource definition (ClusterIssuer) takes some time to register properly in the API server. If the ClusterIssuer CRD was just created and one immediately tries to create a custom resource from it, this error will happen.

https://github.com/hashicorp/terraform-provider-kubernetes-alpha/issues/72

So I have to split the cert-manager chart installation from the ClusterIssuer creation.
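
A rough sketch of that split (namespace and file name are whatever you use):

# install cert-manager first and wait for its CRDs and webhook to be ready
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true --wait

# only after that, apply the ClusterIssuer as a separate step
kubectl apply -f cluster-issuer.yaml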

NAPI updates…

Software interrupts (softirq) or bottom halves are a kernel concept which helps decrease interrupt service latency. Because normal interrupts don’t nest in Linux, the system can’t service any new interrupt while it’s already processing one. Therefore doing a lot of work directly in an IRQ handler is a bad idea. softirqs are a form of processing which allows the IRQ handler to schedule a function to run as soon as the IRQ handler exits. This adds a tier of “low latency processing” which does not block hardware interrupts. If software interrupts start consuming a lot of cycles, however, the kernel will wake up a ksoftirqd thread to take over the I/O portion of the processing. This helps back-pressure the I/O, and makes sure random threads don’t get their scheduler slice depleted by softirq work.

https://people.kernel.org/kuba/napi-updates
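
A quick, purely illustrative way to watch this on a live machine: /proc/softirqs shows per-CPU counters for each softirq type (NET_RX/NET_TX being the networking ones), and the ksoftirqd threads show up in the process list.

# per-CPU softirq counters
cat /proc/softirqs

# the per-CPU kernel threads that take over heavy softirq work
ps -e -o pid,comm | grep ksoftirqd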

Linux used to support nested interrupts but this was removed some time ago in order to avoid increasingly complex solutions to stack overflow issues: allow just one level of nesting, allow multiple levels of nesting up to a certain kernel stack depth, etc.

https://linux-kernel-labs.github.io/refs/heads/master/lectures/interrupts.html

MTU setting

To control the MTU used with a subnet, maybe because you have an ipsec tunnel in the path, you can set MTU on the ip route itself:

10.95.208.0/26 via 172.31.16.1 dev br0 advmss 1300

From the doc:

advmss NUMBER (Linux 2.3.15+ only) the MSS (‘Maximal Segment Size’) to advertise to these destinations when establishing TCP connections. If it is not given, Linux uses a default value calculated from the first hop device MTU. (If the path to these destination is asymmetric, this guess may be wrong.)

https://www.systutorials.com/docs/linux/man/8-ip-route/
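
As a command, installing that route would look something like this (addresses and device taken from the example above; adjust to your setup):

# advertise a 1300-byte MSS to destinations behind the tunnel
ip route replace 10.95.208.0/26 via 172.31.16.1 dev br0 advmss 1300

# verify
ip route show 10.95.208.0/26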

Pod namespaces

By default, Pods are non-isolated, and they accept traffic from any source including other Pods present within the cluster. Pods become isolated by having a NetworkPolicy that selects them. Once there is any NetworkPolicy in a namespace selecting a particular pod, that pod will reject any connections that are not allowed by any NetworkPolicy. Network policies affect only Pod-to-Pod communication and do not affect service-to-service traffic directly. Network policies use labels specified within the PodSelector attribute of their definition to select the Pods on which the associated traffic rules will be enforced.

https://docs.robin.io/platform/latest/manage_network.html?highlight=interface#how-it-works
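
As an illustration, a policy that isolates pods labelled app: backend and only lets app: frontend pods reach them might look like this (namespace, labels and port are made up):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend       # hypothetical name
  namespace: dev
spec:
  # pods matching this selector become isolated; only the rules below are allowed in
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080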

Docker in Docker

/var/run/docker.sock is the default Unix socket. […] Docker daemon by default listens to docker.sock.

To run docker inside docker, all you have to do is run docker with the default Unix socket docker.sock as a volume.

docker run -v /var/run/docker.sock:/var/run/docker.sock -ti docker

Now, from within the container, you should be able to execute docker commands for building and pushing images to the registry.
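
From inside that container, the usual client commands simply talk to the host daemon (the image name below is a placeholder, and the build assumes a Dockerfile in the current directory):

docker version
docker build -t registry.example.com/demo:latest .
docker push registry.example.com/demo:latest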

[…] the actual docker operations happen on the VM host running your base docker container rather than from within the container.

https://devopscube.com/run-docker-in-docker/

kaniko is an open-source container image-building tool created by Google. […] all the image-building operations happen inside the Kaniko container’s userspace.

https://devopscube.com/build-docker-image-kubernetes-pod/
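
A bare-bones kaniko Pod could look roughly like this (build context, destination image and secret name are placeholders; kaniko expects registry credentials at /kaniko/.docker/config.json):

apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build                    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args:
    - --dockerfile=Dockerfile
    - --context=git://github.com/example/repo.git     # placeholder build context
    - --destination=registry.example.com/app:latest   # placeholder destination
    volumeMounts:
    - name: docker-config
      mountPath: /kaniko/.docker
  volumes:
  - name: docker-config
    secret:
      secretName: regcred               # placeholder docker-registry secret
      items:
      - key: .dockerconfigjson
        path: config.json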

Expose pod ports locally

[…]  kubectl port-forward allows using resource name, such as a pod name, to select a matching pod to port forward to.

kubectl port-forward mongo-75f59d57f4-4nd6q 28015:27017 [or]

kubectl port-forward pods/mongo-75f59d57f4-4nd6q 28015:27017 [or]

kubectl port-forward deployment/mongo 28015:27017 [or]

kubectl port-forward replicaset/mongo-75f59d57f4 28015:27017 [or]

https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/

Share files using AWS

If you want to provide temporary access to an object that’s otherwise private, you can generate a presigned URL. The URL will be usable for a specified period of time, after which it will become invalid. You can build presigned URL generation into your code to provide object access programmatically.
The following AWS CLI command will return a URL that includes the required authentication string. The authentication will become invalid after 10 minutes (600 seconds). The default expiration value is 3,600 seconds (one hour).
aws s3 presign s3://MyBucketName/PrivateObject --expires-in 600

AWS Certified Solutions Architect Study Guide: Associate SAA-C02 Exam (Aws Certified Solutions Architect Official: Associate Exam) (English Edition)