Application Integration

Accelerating Helm and Kubernetes adoption

Axway is moving at full speed to optimize AMPLIFY™ API Management deployment on Kubernetes. Vincent O’Brien shared a sample Helm chart for API Management last year. It was a good start for customers moving towards container-based deployment.

Now we’re expanding the sample Helm chart with additional capabilities to make it more suitable for production-oriented experimentation.

Helm and Kubernetes adoption: taking control of your resources

Kubernetes has a built-in mechanism for placing pods appropriately among the available nodes. Still, you may want to fine-tune this mechanism by giving Kubernetes some guidance. You do this through resource quotas and limits.

At the top level, you may want to set a maximum amount of resources to allocate for a specific workload described in the Helm chart. In our example, we set such limits for CPU and RAM (there are more objects that you can specify):
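Here is a minimal sketch of what such a ResourceQuota template could look like (the object name and the .Values paths are illustrative assumptions, not necessarily the names used in the chart):

```yaml
# Sketch of a ResourceQuota for the API Management namespace.
# The .Values paths below are illustrative placeholders.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: apim-quota
  namespace: {{ .Release.Namespace }}
spec:
  hard:
    requests.cpu: "{{ .Values.quota.requests.cpu }}"        # e.g. "4"
    requests.memory: "{{ .Values.quota.requests.memory }}"  # e.g. "8Gi"
    limits.cpu: "{{ .Values.quota.limits.cpu }}"            # e.g. "8"
    limits.memory: "{{ .Values.quota.limits.memory }}"      # e.g. "16Gi"
```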

Note: Template variables in the curly braces are replaced with actual values at deployment time.

This means that if you run multiple workloads on a shared set of nodes, each workload is bound by the limits provided in its ResourceQuota. In addition to the overall limit for the workload, we add limits for each container (the example shows an excerpt from a Deployment manifest):
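A sketch of what the container-level settings in such a Deployment excerpt might look like (the container name, image, and .Values paths are assumptions for illustration):

```yaml
# Excerpt from a Deployment template: per-container requests and limits.
# Container name, image and .Values paths are illustrative placeholders.
spec:
  template:
    spec:
      containers:
        - name: apimgr
          image: "{{ .Values.apimgr.image }}"
          resources:
            requests:
              cpu: "{{ .Values.apimgr.resources.requests.cpu }}"       # e.g. "500m"
              memory: "{{ .Values.apimgr.resources.requests.memory }}" # e.g. "1Gi"
            limits:
              cpu: "{{ .Values.apimgr.resources.limits.cpu }}"         # e.g. "2"
              memory: "{{ .Values.apimgr.resources.limits.memory }}"   # e.g. "2Gi"
```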

You can also avoid providing resource limits for each container in your configuration by setting default values with a LimitRange:
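For example, a LimitRange along these lines (the values are illustrative) applies default requests and limits to any container that does not declare its own:

```yaml
# Sketch of a LimitRange supplying defaults for containers without explicit settings.
apiVersion: v1
kind: LimitRange
metadata:
  name: apim-limit-range
  namespace: {{ .Release.Namespace }}
spec:
  limits:
    - type: Container
      default:            # default limits for containers with none set
        cpu: "1"
        memory: "1Gi"
      defaultRequest:     # default requests for containers with none set
        cpu: "250m"
        memory: "512Mi"
```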

Using these values, the Kubernetes scheduler can make the right decisions when placing the pods in your workload on suitable nodes.

Autoscaling

One of the main benefits of an orchestration engine like Kubernetes is the ability to dramatically improve the operational aspects of a deployment. Autoscaling is a major advantage of Kubernetes. The updated Helm chart incorporates a sample configuration of the Horizontal Pod Autoscaler (HPA):
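A minimal sketch of such an HPA manifest, assuming a Deployment named apimgr and illustrative .Values paths (the chart may use different names):

```yaml
# Sketch of a Horizontal Pod Autoscaler driven by average CPU utilization.
# Target Deployment name and .Values paths are assumptions for illustration.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: apimgr-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: apimgr
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.autoscaling.averageUtilization }}
```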

In this example, autoscaling is based on average CPU utilization. Kubernetes periodically checks this value across all deployed pods (every 15 seconds by default). When the average utilization goes above a predefined value (shown as the .Values.autoscaling.averageUtilization variable), HPA creates additional replicas of the required pod. HPA calculates utilization against the containers' resource requests, so setting them correctly is essential for the trigger event (average CPU utilization in our example) to fire as expected.

Ingress

The last major update in the sample Helm chart is the introduction of an Ingress configuration for the services that need to be exposed outside of the cluster:
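As an illustration, an Ingress resource for exposing one of the API Management services might look roughly like this (the host, service name, port, and .Values paths are assumptions, not the exact chart contents):

```yaml
# Sketch of an Ingress exposing an API Management service via the NGINX controller.
# Host, service name, and port are illustrative placeholders.
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apimgr-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: {{ .Values.ingress.host }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: apimgr
                port:
                  number: 8075
{{- end }}
```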

The Ingress configuration is based on the popular NGINX controller, which can be deployed as a pod inside your Kubernetes cluster. However, many cloud providers offer their own Ingress controllers that you can use instead of NGINX. For example, if you’re running your cluster in AWS, you can use the AWS Application Load Balancer (ALB) controller. Similar support is available on Azure and Google Cloud.
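For instance, on AWS you could point the same Ingress at the ALB controller by changing the ingress class and adding ALB-specific annotations (a sketch; the annotation values depend on your environment):

```yaml
# Illustrative changes for the AWS Load Balancer (ALB) controller instead of NGINX.
metadata:
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
```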

Conclusion

Kubernetes provides many capabilities that operations teams can use to optimize the deployment and operation of Axway AMPLIFY API Management. The sample Helm chart gives Axway customers a good starting point for their journey with Axway API Management on Kubernetes. The values.yaml file lets you provide environment-specific values for the template variables. It also lets you turn certain features, such as Ingress, on or off.
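For example, an environment-specific values file could look roughly like this (the key names are illustrative and match the sketches above, not necessarily the chart's actual keys):

```yaml
# Illustrative values.yaml excerpt for one environment; key names are assumptions.
autoscaling:
  minReplicas: 2
  maxReplicas: 6
  averageUtilization: 70

ingress:
  enabled: true
  host: api.example.com

apimgr:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "2"
      memory: 2Gi
```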

We would love to learn from you, our customers. Please share any feedback or your own version of the Helm chart for Axway API Management in the comments section below.