Ingress Controller, MetalLB

A Service definition [eg] collects all pods that have a selector label app=foo and routes traffic evenly among them. However, this service is accessible only from inside the cluster.
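As a sketch, a minimal Service of this kind might look like the following (the names `foo-service` and `app: foo` are illustrative, not from any particular deployment):

```yaml
# Minimal ClusterIP Service (the default type): selects every pod
# labeled app=foo and balances traffic across them on port 80.
# Reachable only from inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  selector:
    app: foo
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
```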

[…] Two mechanisms were integrated directly into the Service specification to deal with it. […] You can include a field named type, which takes a value of either NodePort or LoadBalancer.

The NodePort type assigns a random TCP port and exposes it outside the cluster. A client can target any node in the cluster using that port, and its messages will be relayed to the right place. The downside is that the port’s value must fall between 30000 and 32767.
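For illustration, the same hypothetical Service as a NodePort (the explicit `nodePort` is optional; if omitted, Kubernetes picks one from the 30000–32767 range):

```yaml
# NodePort Service: reachable on <any-node-IP>:30080 from outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: foo-nodeport
spec:
  type: NodePort
  selector:
    app: foo
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    nodePort: 30080   # optional; must be within 30000-32767
```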

The LoadBalancer [type] only works if you are operating in a cloud-hosted environment like Google’s GKE or Amazon’s EKS: a hosted load balancer is spun up for every service with this type, along with a new public IP address, which incurs additional cost.

The Kubernetes API introduced a new type of manifest, called an Ingress. The manifest doesn’t actually do anything on its own; you must deploy an Ingress Controller into your cluster to watch for these declarations.

Ingress controllers are pods, just like any other application, so they’re part of the cluster and can see other pods. They’re built using reverse proxies. Ingress Controllers are susceptible to the same walled-in jail as other Kubernetes pods. You need to expose them to the outside via a Service with a type of either NodePort or LoadBalancer […] one service connected to one Ingress Controller, which, in turn, is connected to many internal pods.

You can install the HAProxy Ingress Controller using Helm. The HAProxy Ingress Controller runs inside a pod in your cluster and uses a Service resource of type NodePort to publish access to external clients.

https://thenewstack.io/kubernetes-ingress-for-beginners/

If you’re not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created.

Bare-metal cluster operators are left with two lesser tools to bring user traffic into their clusters, “NodePort” and “externalIPs” services.
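For completeness, an “externalIPs” service is a plain Service with one or more IPs that already route to a cluster node listed under `spec.externalIPs`; kube-proxy then accepts external traffic arriving on those IPs. A sketch, with a made-up address:

```yaml
# externalIPs Service: 10.95.208.90 is assumed to be an address that
# already routes to one of the cluster nodes (no allocation happens here).
apiVersion: v1
kind: Service
metadata:
  name: foo-external
spec:
  selector:
    app: foo
  externalIPs:
  - 10.95.208.90
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
```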

https://metallb.universe.tf

MetalLB hooks into your Kubernetes cluster, and provides a network load-balancer implementation, in clusters that don’t run on a cloud provider.

In layer 2 mode, one machine in the cluster takes ownership of the service, and uses standard address discovery protocols (ARP for IPv4, NDP for IPv6) to make those IPs reachable on the local network.

In BGP mode, all machines in the cluster establish BGP peering sessions with nearby routers that you control, and tell those routers how to forward traffic to the service IPs.
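A BGP-mode configuration for the legacy ConfigMap format (the one used by the MetalLB v0.10.x manifests applied later in these notes) might look like this; the peer address, ASNs, and address range are all placeholders:

```yaml
# MetalLB BGP-mode config (legacy ConfigMap format, pre-v0.13 CRDs).
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.0.0.1   # hypothetical upstream router
      peer-asn: 64501          # router's ASN
      my-asn: 64500            # ASN MetalLB speaks as
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 192.168.10.0/24
```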

https://metallb.universe.tf/concepts/

ingress-nginx is an Ingress controller for Kubernetes using NGINX as a reverse proxy and load balancer.

https://github.com/kubernetes/ingress-nginx

Ingress does not support TCP or UDP services. For this reason the [nginx] Ingress controller uses the flags --tcp-services-configmap and --udp-services-configmap to point to an existing config map where the key is the external port to use and the value indicates the service to expose.

https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/

To expose a UDP service via NGINX, you need four things:

1. Add port definition to DaemonSet (by default it only exposes TCP/80 and TCP/443)

2. Run your app

3. Create a service exposing your app

4. Add service definition to ConfigMap udp-services in the ingress-nginx namespace.
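Step 4 follows the `namespace/service:port` format documented for ingress-nginx. A sketch, using DNS on UDP/53 as the example service (the `kube-system/kube-dns` target is illustrative):

```yaml
# udp-services ConfigMap: maps external port 53 to the kube-dns
# Service in the kube-system namespace, port 53.
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  "53": kube-system/kube-dns:53
```

Remember that the controller's DaemonSet (or Deployment) must also declare UDP/53 in its container ports, per step 1.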

https://gist.github.com/superseb/ba6becd1a5e9c74ca17996aa59bcc67e

Regardless of your ingress strategy, you probably will need to start with an external load balancer. This load balancer will then route traffic to a Kubernetes service (or ingress) on your cluster that will perform service-specific routing. In this set up, your load balancer provides a stable endpoint (IP address) for external traffic to access.

https://www.getambassador.io/learn/kubernetes-ingress/kubernetes-ingress-nodeport-load-balancers-and-ingress-controllers/

[…] use Nginx as an Ingress Controller for our cluster combined with MetalLB which will act as a network load-balancer for all incoming communications.

https://blog.dbi-services.com/setup-an-nginx-ingress-controller-on-kubernetes/

To install MetalLB on bare metal, you can either apply YAML manifests or use Helm. In this case we used YAML. A ConfigMap instance has to be created with the configuration for MetalLB: the layer 2 protocol (ARP) and the list of IPs to be handed out to LoadBalancer instances.

> kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.3/manifests/namespace.yaml
> kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.3/manifests/metallb.yaml
> cat metallb-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.95.208.83-10.95.208.84
> kubectl apply -f metallb-config.yaml

In order to test, you have to create a LoadBalancer service with a selector that actually matches an existing pod. For example:

> cat load-balancer-example.yaml
apiVersion: v1
kind: Service
metadata:
  name: load-balancer-service
spec:
  selector:
    app: example
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
> cat pod-example.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    app: example
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
> kubectl apply -f load-balancer-example.yaml
service/load-balancer-service created
> kubectl apply -f pod-example.yaml
pod/static-web created
> kubectl get services
NAME                             TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                     AGE
load-balancer-service            LoadBalancer   172.19.66.192   10.95.208.83   80:32676/TCP                19s

From the previous output, you can check that a LoadBalancer instance is created and is assigned the IP address 10.95.208.83. If we arping 10.95.208.83, we can see in the speaker logs that the MetalLB speaker gets the ARP request and responds:

arping -I br0 10.95.208.83
ARPING 10.95.208.83 from 10.95.208.80 br0
Unicast reply from 10.95.208.83 [A4:BF:01:74:EA:12]  1.553ms
Unicast reply from 10.95.208.83 [A4:BF:01:74:EA:12]  1.381ms

[...]

> kubectl logs -l component=speaker -n metallb-system --since=1m

{"caller":"arp.go:102","interface":"br0","ip":"10.95.208.83","msg":"got ARP request for service IP, sending response","responseMAC":"a4:bf:01:74:ea:12","senderIP":"10.95.208.80","senderMAC":"a4:bf:01:74:e9:9b","ts":"2021-10-15T14:31:14.314805023Z"}

And we can check that we can hit the test pod on the LoadBalancer IP and port:

wget 10.95.208.83
--2021-10-15 16:31:50-- http://10.95.208.83/
Connecting to 10.95.208.83:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 615 [text/html]
Saving to: ‘index.html’

index.html 100%[============================>] 615 --.-KB/s in 0s

2021-10-15 16:31:50 (196 MB/s) - ‘index.html’ saved [615/615]

Once MetalLB is installed, we can proceed to install the NGINX Ingress Controller. The ordering matters because the Ingress Controller is exposed by means of a LoadBalancer or NodePort service:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx

After installation, we can list the deployed services and we will find a LoadBalancer instance:

kubectl get services -A

NAMESPACE                NAME                                         TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
t001-u000003             ingress-nginx-controller                     LoadBalancer   172.19.164.118   10.95.208.83   80:30813/TCP,443:30232/TCP   19s
t001-u000003             ingress-nginx-controller-admission           ClusterIP      172.19.88.243    <none>         443/TCP

We can finally deploy an Ingress:

> cat ingress-test.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-example
                port:
                  number: 80
> kubectl apply -f ingress-test.yaml

ingress.networking.k8s.io/example-ingress created

> kubectl get ingress -A
NAMESPACE      NAME              CLASS    HOSTS   ADDRESS   PORTS   AGE
default        example-ingress   <none>   *                 80      8s

And check again that we can hit the service on the LoadBalancer IP and port 80:

[labuser@tip-dev-1 ~]$ wget 10.95.208.83:80
--2021-10-18 16:25:25--  http://10.95.208.83/
Connecting to 10.95.208.83:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 562 [text/html]
Saving to: ‘index.html.2’

100%[====================================================================================================================================================================================================================================================================================>] 562         --.-K/s   in 0s

2021-10-18 16:25:25 (55.1 MB/s) - ‘index.html.2’ saved [562/562]
