Ingresses at Smarkets

Nithya Shree Rajashekhar · Smarkets HQ · Jan 13, 2022


In modern environments, applications are built in a microservice architectural style, where smaller scopes enable faster delivery life cycles. Kubernetes is an orchestration platform for managing such microservice workloads reliably.

These microservices need to reach each other through stable, human-readable names (DNS) and/or be exposed to the world. In this blog, we will discuss how a logical set of Pods can be abstracted as an API, how it can be made accessible from the internet, and how Smarkets implements this for cloud and bare-metal Kubernetes clusters. Finally, we look at how we access such ingresses using a zero-trust security model.

Service

Kubernetes’ Service resource enables us to expose a set of Pods (denoted as Endpoints) grouped by the matching labels in its spec. Cluster-internal communication is made easier with the default Service type, ClusterIP: an IP from service-cluster-ip-range is allocated, and the Service is reachable as <service_name> within its namespace and as <service_name>.<namespace> from other namespaces. A NodePort Service can be used to connect from outside the kube cluster; a port on the worker nodes is chosen to expose the Service (powered by kube-proxy). Extending this further, a LoadBalancer-type Service works with the respective cloud provider to provision a load balancer.
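
As a concrete illustration, here is a minimal ClusterIP Service; the name, namespace and app: api label are hypothetical:

```yaml
# A minimal ClusterIP Service sketch; names and labels are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: trading
spec:
  type: ClusterIP        # the default, shown here for clarity
  selector:
    app: api             # Pods matching this label become the Endpoints
  ports:
    - port: 80           # port the Service listens on
      targetPort: 8080   # container port on the selected Pods
```

With kube-proxy in place, connections to api.trading:80 are distributed across port 8080 of the matching Pods.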

Ingress

As the need for end-user URLs and URIs grows, the management overhead with Service resources alone increases and gets expensive. Kubernetes natively provides the Ingress object, which acts as a layer of load balancing within the cluster, routing to Services based on the host and path rules defined in the manifest. Ingress resources still need to be exposed to the external world and must be explicitly managed by an ingress controller (which is not bundled within the universal kube-controller-manager itself). An Ingress resource is adopted by an ingress controller based on its IngressClass, configured through .spec.ingressClassName since Kubernetes 1.18 and previously through the kubernetes.io/ingress.class annotation.
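
For instance, a minimal Ingress with host and path rules might look like the following; the hostname, backend Service and class name are hypothetical:

```yaml
# A minimal Ingress sketch; hostname, backend and class are hypothetical.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  namespace: trading
spec:
  ingressClassName: nginx      # decides which controller adopts this Ingress
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api      # the ClusterIP Service from the earlier example
                port:
                  number: 80
```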

Here are three of the ingress controllers we use:

  1. The ingress-nginx controller deployment comes with a LoadBalancer-type Service to reverse proxy incoming requests. It generates nginx configuration based on the ingress objects’ rule sets, and requests route through nginx (running inside the controller workload) to reach the destination Service. It is cost-saving, as a single controller can serve many ingress definitions; the disadvantage is that nginx reloads whenever any ingress object of its class is modified, breaking connections with all clients. Multiple ingress-nginx controllers can be set up in a cluster, each handling a separate class (e.g. for sensitive applications with wss), to avoid unnecessary connection re-establishments. [Note: the electionID and ingress-class parameters must be cautiously configured to be mutually exclusive among the controllers.]
  2. The AWS ALB ingress controller creates a dedicated Application LoadBalancer with Target Groups on AWS cloud. Traffic is not proxied through the controller; backends are registered based on the target-type annotation, Instance or IP, to include all worker nodes of the cluster or Pod IPs from the CNI respectively. With the IngressGroup specification (referred to by spec.group or the legacy annotation alb.ingress.kubernetes.io/group.name), multiple Ingress resources spanning namespaces can share a common ALB (see the sketch after this list). The controller configures the ALB listeners based on the grouped ingresses’ routing rules. This solution is native to AWS, supporting firewalling with security groups, DDoS protection, metrics, etc.
  3. Voyager is an HAProxy-backed ingress controller that works at layers 4 and 7 of the OSI model. It creates LoadBalancers supporting ports other than 80/443 and protocols other than HTTP/HTTPS, which is helpful in many scenarios such as a Jenkins controller-executor setup, Elasticsearch, Kafka, SMTP, etc.
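
To illustrate the IngressGroup behaviour from point 2, here is a sketch of an Ingress joining a shared ALB; the group name, scheme and hostname are hypothetical:

```yaml
# A hypothetical Ingress joining a shared ALB via an IngressGroup.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-alb
  annotations:
    alb.ingress.kubernetes.io/group.name: shared-alb   # Ingresses sharing this name share one ALB
    alb.ingress.kubernetes.io/target-type: ip          # register Pod IPs from the CNI directly
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  ingressClassName: alb
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 80
```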

Nice to haves

  • Deletion-protection settings can be enabled on the LoadBalancers to prevent accidental deletions.
  • Ingresses can terminate SSL/TLS; most ingress controllers work with cert-manager to generate certificates for the hostnames in the ingress via the cert-manager.io/cluster-issuer annotation (see the sketch after this list).
  • external-dns can auto-update DNS records, mapping the hostnames to the LoadBalancer.
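
A sketch of how these pieces combine on a single Ingress; the issuer name, hostname and secret name are hypothetical:

```yaml
# Hypothetical Ingress combining cert-manager and external-dns.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # cert-manager issues a cert for the spec.tls hosts
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.example.com
      secretName: api-example-com-tls                  # cert-manager stores the certificate here
  rules:
    - host: api.example.com      # external-dns picks up this host and creates the DNS record
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 80
```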

Edge clusters on bare metal

MetalLB is used alongside the ingress controllers in non-cloud Kubernetes clusters to simulate network LoadBalancer behaviour. The MetalLB controller is allocated a predefined set of IPs, and each ingress controller’s LoadBalancer-type Service status is updated with one IP from that range. The MetalLB speaker DaemonSet then announces to the network (through ARP) that the IP is assigned to one of the cluster’s nodes, which lets the gateway route traffic from other networks to the host behind this virtual IP.
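
As an illustration, a minimal layer-2 MetalLB configuration (in the ConfigMap format current at the time of writing) might look like this; the pool name and address range are hypothetical:

```yaml
# A minimal MetalLB layer-2 sketch; the pool name and addresses are hypothetical.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    address-pools:
      - name: ingress-pool
        protocol: layer2
        addresses:
          - 192.168.10.240-192.168.10.250   # IPs handed out to LoadBalancer Services
```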

Zero Trust Network Access

As a VPN replacement, Cloudflare Teams (with the Warp client on employee devices) enables secure access to private corporate resources. The cloudflared service runs in the private network, setting up a reverse tunnel to the Cloudflare edge. This allows access to specified internal web applications at a public domain <infra.ztna> (with a <connector-uuid>.cfargotunnel.com CNAME that is implicitly proxied to <infra.internal>). All traffic from the devices is routed through the Cloudflare edge; <*.infra.ztna> URLs are challenged with the identity policies defined (for example, SSO integration or device posture such as serial number) and then served via the bridging cloudflared instance.
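
A sketch of the cloudflared side of this setup; the hostnames and internal service address are hypothetical, and <connector-uuid> stays a placeholder:

```yaml
# Hypothetical cloudflared tunnel configuration (config.yml).
tunnel: <connector-uuid>
credentials-file: /etc/cloudflared/<connector-uuid>.json
ingress:
  - hostname: grafana.infra.ztna                  # public ZTNA hostname
    service: http://grafana.infra.internal:3000   # private upstream behind the tunnel
  - service: http_status:404                      # mandatory catch-all rule
```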

Want to work with these exciting technologies? Take a look at Smarkets’ current vacancies here.
