
Nishi's Tech Blog | Hello, I'm Nishadi Kirielle | Welcome to my tech blog

Load Balancing Kubernetes Services and Enabling Session Affinity

Kubernetes is an open source cluster management system for running and managing containerized applications. Users therefore need a way to expose the services created in a Kubernetes cluster so that they are accessible from outside.

Kubernetes' built-in mechanisms for exposing services to external traffic provide layer 4 load balancing for the cluster.


This post demonstrates an existing problem with session affinity in Kubernetes when services are load balanced through ingress controllers, and explains the solution Kubernetes provides.


Existing Methods for Exposing Kubernetes Services

Kubernetes provides a couple of methods for exposing services to external traffic. NodePort and LoadBalancer can be specified via the ServiceType attribute in the service definition yaml/json file. Another approach, introduced in Kubernetes release 1.1, is the Ingress API.


NodePort

Defining ServiceType as NodePort and specifying a port number as nodePort exposes the specified service on all nodes at that port. To make the service reachable from the internet, the user can expose one or more nodes on that port. When incoming traffic hits a node on that port, it gets load balanced among the pods of the service.
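As a sketch, a NodePort service definition might look like the following. The service name, selector, and port values here are illustrative, not taken from the examples in this post:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: s1            # hypothetical service name
spec:
  type: NodePort
  selector:
    app: s1           # assumed pod label
  ports:
  - port: 80          # port the service listens on inside the cluster
    targetPort: 8080  # assumed container port
    nodePort: 30080   # port exposed on every node (default range 30000-32767)
```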


LoadBalancer

Declaring the ServiceType as LoadBalancer distributes the incoming traffic among the pods of the service through a cloud load balancer. This approach is supported only by certain cloud providers and Google Container Engine.
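A minimal LoadBalancer service sketch, again with illustrative names, differs from the NodePort case only in the service type; the cloud provider provisions the external load balancer automatically:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: s1            # hypothetical service name
spec:
  type: LoadBalancer  # cloud provider provisions an external load balancer
  selector:
    app: s1           # assumed pod label
  ports:
  - port: 80
    targetPort: 8080  # assumed container port
```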

Ingress API

An ingress is a collection of rules that enables the user to expose services to the internet through custom URLs and multiple virtual hosts.

Simple Fanout Ingress Definition

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80

This definition allows the user to access the service s1 at the URL /foo and the service s2 at the URL /bar.


Name Based Virtual Hosting


apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
  - host: bar.foo.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80

This definition allows the user to access the service s1 at the host name foo.bar.com and the service s2 at bar.foo.com.


One approach to using the Ingress API to expose services is to use an ingress controller. The main responsibility of an ingress controller is to watch the API server's /ingresses endpoint for new ingresses. The existing ingress controller uses an nginx load balancer and updates the nginx configuration file according to the ingress definition.


For example, if we deploy the nginx-alpha ingress controller and create the simple fanout ingress definition above, the ingress controller would generate the following nginx.conf file and reload the nginx server.




events {
  worker_connections 1024;
}
http {
  server {
    listen 80;
    server_name foo.bar.com;
    resolver 127.0.0.1;

    location /foo {
      proxy_pass http://s1;
    }
    location /bar {
      proxy_pass http://s2;
    }
  }
}


The problem with session affinity when using an ingress controller


What is session affinity?
Session affinity is a feature that ensures requests from the same client are always routed back to the same server within a cluster of servers.

How do you enable session affinity in a Kubernetes service?
To enable session affinity in Kubernetes, set the following in the service definition.


service.spec.sessionAffinity to "ClientIP"
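In the service definition yaml, this setting looks like the sketch below. The service name and selector are illustrative; only the sessionAffinity field is the point here:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: s1              # hypothetical service name
spec:
  sessionAffinity: ClientIP  # route requests from the same client IP to the same pod
  selector:
    app: s1             # assumed pod label
  ports:
  - port: 80
```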


The existing problem with session affinity and load balancing


With this approach, the nginx ingress controller load balances the external traffic to the service, so requests from all clients reach the service through the load balancer. Session affinity in the service uses the client IP to decide which pod should handle a request. Since the service receives all requests from the load balancer, it sees the same client IP for every request. The service therefore directs the requests from all clients to the same pod, because it cannot differentiate between the clients. This scenario overloads that one pod and would cause problems for the running application.

The Solution...

The solution is to load balance directly to the pods without routing the traffic through the service. This functionality is implemented by the service-loadbalancer in Kubernetes. The implementation uses HAProxy to enable session affinity and directly load balance the external traffic to the pods without going through services.






To create this HAProxy load balancer, you can use https://github.com/kubernetes/contrib/tree/master/service-loadbalancer

By creating the replication controller defined in that repository's rc.yaml, you can create the HAProxy load balancing pod.



$ kubectl create -f rc.yaml

This command creates the HAProxy load balancer pod, but the pod will not reach the running state until the nodes that need to be exposed via the ingress resource are labeled with the

role=loadbalancer

tag.

To label the targeted nodes, run the command below.




$ kubectl label node <node-ip> role=loadbalancer

This runs the load balancer on the labeled nodes. When we then create an ingress resource, the load balancer distributes the external traffic to the pods according to the ingress definition.




References :

  1. Kubernetes service load balancer
  2. Kubernetes services
  3. Kubernetes ingress controllers
Special thanks to Mr. Imesh Gunaratne, Product Lead, WSO2 PPaaS, for his guidance and support.
