Configuring HAProxy Load Balancer for WSO2 AppCloud
Load balancing in the WSO2 App Cloud Kubernetes cluster is handled by an HAProxy load balancer. To load balance within the cluster, the HAProxy configuration file has to be updated in real time with the details of newly created applications. App Cloud achieves this with the service load balancer from the Kubernetes contrib repo, which provides load balancing for services.
This blog post explains how to configure an HAProxy load balancer in WSO2 App Cloud.
- Get the Kubernetes contrib repo to your local machine
$ go get github.com/kubernetes/contrib.git
- Move to the service load balancer
kubernetes/contrib/service-loadbalancer
- To enable HTTPS via the load balancer, the proper CA certificate file, the worker nodes' certificate file and the key file must be added to the load balancer. To make these files available in the load-balancing Kubernetes pod, we need to include them in the Docker image. Follow the guidelines below to add these files to the Docker image of the HAProxy service load balancer. For this tutorial, we assume that the CA certificate is ca.crt, the SSL certificate of the worker node is ssl.crt and the private key is ssl.key.
- Combine the SSL certificate file and the private key of the worker node to a single .pem file
$ cat ssl.crt ssl.key \
| tee ssl.pem
- Place the resulting ssl.pem file and the ca.crt file in the service-loadbalancer folder.
- Update the Dockerfile to add the certificates
##kubernetes/contrib/service-loadbalancer/Dockerfile
FROM gcr.io/google_containers/haproxy:0.2
MAINTAINER Prashanth B <beeps@google.com>
RUN mkdir -p /etc/haproxy/errors /var/state/haproxy
RUN for ERROR_CODE in 400 403 404 408 500 502 503 504;do curl -sSL -o /etc/haproxy/errors/$ERROR_CODE.http \
https://raw.githubusercontent.com/haproxy/haproxy-1.5/master/examples/errorfiles/$ERROR_CODE.http;done
RUN wget -O /sbin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.0.0/dumb-init_1.0.0_amd64 && \
chmod +x /sbin/dumb-init
ENTRYPOINT ["dumb-init", "/service_loadbalancer"]
ADD haproxy.cfg /etc/haproxy/haproxy.cfg
ADD service_loadbalancer service_loadbalancer
ADD service_loadbalancer.go service_loadbalancer.go
ADD template.cfg template.cfg
ADD loadbalancer.json loadbalancer.json
ADD haproxy_reload haproxy_reload
ADD README.md README.md
ADD ssl.pem /etc/haproxy/ssl.pem
ADD ca.crt /etc/haproxy/ca.crt
RUN touch /var/run/haproxy.pid
- Now that the certificate files have been added to the load balancer's Dockerfile, we need to build it with the service load balancer. Run the 'make' command to run the make file located in 'kubernetes/contrib/service-loadbalancer/Make'
apiVersion: v1
kind: ReplicationController
metadata:
  name: service-loadbalancer
  labels:
    app: service-loadbalancer
    version: v1
spec:
  replicas: 1
  selector:
    app: service-loadbalancer
    version: v1
  template:
    metadata:
      labels:
        app: service-loadbalancer
        version: v1
    spec:
      nodeSelector:
        role: loadbalancer
      containers:
      - image: nishadi/lb-wso2-appcloud-prod:0.1
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        name: haproxy
        ports:
        # All http services
        - containerPort: 80
          hostPort: 80
          protocol: TCP
        # nginx https
        - containerPort: 443
          hostPort: 8443
          protocol: TCP
        # mysql
        - containerPort: 3306
          hostPort: 3306
          protocol: TCP
        # haproxy stats
        - containerPort: 1936
          hostPort: 1936
          protocol: TCP
        resources: {}
        args:
        - --tcp-services=mysql:3306,nginxsvc:443
        - --ssl-cert=/etc/haproxy/ssl.pem
        - --ssl-ca-cert=/etc/haproxy/ca.crt
We can deploy the above replication controller using the below command.
$ kubectl create -f ./rc.yaml
replicationcontrollers/service-loadbalancer
$ kubectl get pods -l app=service-loadbalancer
NAME READY STATUS RESTARTS AGE
service-loadbalancer-dapxv 0/2 Pending 0 1m
$ kubectl describe pods -l app=service-loadbalancer
Events:
FirstSeen From Reason Message
Tue, 21 Jul 2015 11:19:22 -0700 {scheduler } failedScheduling Failed for reason MatchNodeSelector and possibly others
The error above keeps the pod from starting, because the scheduler is waiting for you to tell it which nodes to use as a load balancer. We therefore need to label the node with the node selector defined in rc.yaml.
$ kubectl label node e2e-test-beeps-minion-c9up role=loadbalancer
NAME LABELS STATUS
e2e-test-beeps-minion-c9up kubernetes.io/hostname=e2e-test-beeps-minion-c9up,role=loadbalancer Ready
This will start the HAProxy load-balancing pod in the Kubernetes cluster.
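The `cat ssl.crt ssl.key` step earlier produces a broken ssl.pem when ssl.crt has no final newline, because the END CERTIFICATE and BEGIN PRIVATE KEY markers end up on one line. The pre-build check below is an added example, not part of the original post or the contrib repo. The function names, return codes and placeholder PEM bodies are constructed for illustration.

```shell
#!/bin/sh
# Added example: structural check of the combined ssl.pem before it is ADDed
# to the image. Function names and return codes are this example's own.

# check_haproxy_pem FILE
#   0 = at least one certificate and exactly one private key, markers intact
#   1 = missing or empty file   2 = fused, nested or unbalanced markers
#   3 = wrong number of certificate or private-key blocks
check_haproxy_pem() {
    [ -s "$1" ] || return 1
    awk '
        { sub(/\r$/, "") }                       # tolerate CRLF files
        # A marker must be alone on its line; cat fuses two markers when
        # ssl.crt has no final newline.
        /-----(BEGIN|END) / && !/^-----(BEGIN|END) [A-Z0-9 ]+-----$/ { rc = 2; exit }
        /^-----BEGIN / {
            if (cur != "") { rc = 2; exit }      # BEGIN inside an open block
            cur = $0; sub(/^-----BEGIN /, "", cur); sub(/-----$/, "", cur)
            if (cur == "CERTIFICATE") certs++
            else if (cur ~ /PRIVATE KEY$/) keys++
            next
        }
        /^-----END / {
            closing = $0; sub(/^-----END /, "", closing); sub(/-----$/, "", closing)
            if (closing != cur) { rc = 2; exit } # END does not match BEGIN
            cur = ""
        }
        END {
            if (rc) exit rc
            if (cur != "") exit 2                # block never closed
            if (certs < 1 || keys != 1) exit 3
        }
    ' "$1"
}

# Constructed demo: ssl.crt written without a final newline, then concatenated
# the same way as in the post. The PEM bodies are placeholders, not real keys.
demo_fused_concat() {
    d=$(mktemp -d) || return 99
    printf '%s\n%s\n%s' '-----BEGIN CERTIFICATE-----' 'Q09OU1RSVUNURUQ=' \
        '-----END CERTIFICATE-----' > "$d/ssl.crt"
    printf '%s\n%s\n%s\n' '-----BEGIN PRIVATE KEY-----' 'Q09OU1RSVUNURUQ=' \
        '-----END PRIVATE KEY-----' > "$d/ssl.key"
    cat "$d/ssl.crt" "$d/ssl.key" > "$d/ssl.pem"
    rc=0; check_haproxy_pem "$d/ssl.pem" || rc=$?
    rm -rf "$d"
    return "$rc"                                 # 2: markers fused on one line
}
```

Run `check_haproxy_pem ssl.pem && make` from the service-loadbalancer folder. The check is structural only and does not verify that the private key belongs to the certificate.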