Goglides Dev 🌱

Balkrishna Pandey


Openshift-UPI: Remove external load-balancer dependencies

I probably shouldn't be writing this blog for Goglides DEV, given the age of this community and who follows me now: most of my subscribers came from TikTok and are newcomers looking for help with "How to Start an IT Career" and the basics of Kubernetes and DevOps. Still, I'm adding a few bits of background to each topic in the hope that they provide some context; if not, feel free to skip this post for now.

Remove External Load-Balancer Dependencies


Recently, I've been involved in a first-of-its-kind POC project to stand up an OpenShift cluster. The POC aimed to meet the following requirements:

  • The cluster should be created using the OpenShift UPI (User Provisioned Infrastructure) method (this has to do with legacy CI/CD and how the project is customized; we are trying to fit into the existing setup without changing the pipeline).

  • Build a three-node hyper-converged OpenShift cluster where:
    a. each of the three nodes serves as both a worker and an OpenShift control-plane node;
    b. to avoid external load-balancer dependencies, all three nodes also operate as load balancers and DNS resolvers.

  • Reuse the existing pipeline, in which an Ansible playbook generates the custom Ignition file.

Solution Approach

The first requirement is simple to fulfill: we alter the OpenShift settings to make the masters schedulable. Since this is a supported configuration, there isn't much to do.
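Concretely, after running `openshift-install create manifests`, the generated Scheduler manifest can be edited so the control-plane nodes accept workloads. A sketch of `manifests/cluster-scheduler-02-config.yml` with `mastersSchedulable` flipped to `true` (the other fields stay as the installer generated them):

```yaml
# manifests/cluster-scheduler-02-config.yml
# Generated by `openshift-install create manifests`; only
# mastersSchedulable is changed from the installer default.
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  mastersSchedulable: true
status: {}
```

For a three-node cluster this pairs with setting the compute (worker) replica count to 0 in `install-config.yaml`.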

Removing the external load balancer and moving those components inside the cluster, on the other hand, is challenging. There is no officially supported method for doing this with a UPI-based OpenShift installation. How did we address the problem?

  • We deployed HAProxy, Keepalived, and CoreDNS in the same cluster to meet this need.
  • We created custom Ignition files with HAProxy, Keepalived, and CoreDNS included.
  • We used kubelet static Pods to manage these components.

Now that we know the basic objectives and have a high-level overview of how we implemented the solution, I want to divide this blog into a few topics:

  • I'd like to present an OpenShift overview for beginners in a separate blog in this series, which will be helpful for people who are familiar with vanilla Kubernetes but want to learn more about OpenShift. I will cover the prerequisites for an OpenShift UPI cluster and what changes we make to those components. I'll also go through the basics of the other available installation options. If you're already knowledgeable on the subject, you may skip that section.

  • After that, we'll show how to use a tool called filetranspiler to create custom Ignition files. You can skip this if you have a different method for generating them.
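For anyone who hasn't seen it, filetranspiler merges a directory tree (a "fake root" that mirrors the target filesystem) into an installer-generated Ignition file. A minimal sketch, assuming the static-pod manifests already exist locally; the flag names below come from the project's README, so verify them against `filetranspiler --help` for your version:

```shell
# Lay out the extra files under a fake root that mirrors the node's filesystem
mkdir -p fakeroot/etc/kubernetes/manifests
cp haproxy.yaml keepalived.yaml coredns.yaml fakeroot/etc/kubernetes/manifests/

# Merge the fake root into the installer-generated Ignition file
filetranspiler -i master.ign -f fakeroot -o master-custom.ign
```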

  • After that, we'll talk about Keepalived, including what it is, what problem it is trying to solve, how it works, and how we're utilizing it to solve our challenges.
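As a preview of the idea: every node runs Keepalived, and the nodes hold a VRRP election; whichever node wins owns the virtual IP (VIP), and if that node fails another takes over. A minimal sketch of a `keepalived.conf`; the interface name, router ID, and VIP (a TEST-NET address here) are placeholders for illustration:

```
# /etc/keepalived/keepalived.conf (sketch)
vrrp_instance API_VIP {
    state BACKUP            # let the VRRP election decide the master
    interface ens3          # placeholder NIC name
    virtual_router_id 51    # must match on all three nodes
    priority 100            # vary per node to bias the election
    advert_int 1
    virtual_ipaddress {
        192.0.2.10/24       # placeholder API VIP
    }
}
```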

  • The next topic will cover HAProxy and how we use it.
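To preview that piece: each node runs HAProxy in front of the API, balancing across all three control-plane nodes (the machine-config server on port 22623 gets a similar frontend/backend pair). A sketch of the relevant `haproxy.cfg` fragment; the VIP and node IPs are placeholders, and binding to the VIP rather than `*` avoids clashing with the node's own kube-apiserver on 6443:

```
# /etc/haproxy/haproxy.cfg (sketch)
defaults
    mode tcp
    timeout connect 5s
    timeout client  1m
    timeout server  1m

frontend api
    bind 192.0.2.10:6443        # placeholder VIP held by Keepalived
    default_backend api-be

backend api-be
    balance roundrobin
    server master-0 192.0.2.11:6443 check
    server master-1 192.0.2.12:6443 check
    server master-2 192.0.2.13:6443 check
```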

  • Following that, we'll go over CoreDNS and how we're utilizing it to resolve DNS records within the cluster for node-to-node communication.
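As a preview, CoreDNS's `hosts` plugin is enough to serve the handful of records a UPI cluster expects (api, api-int, and the node names), while forwarding everything else upstream. A sketch of a Corefile; the zone name and addresses are placeholders:

```
# Corefile (sketch) with a placeholder cluster domain
cluster.example.com:53 {
    hosts {
        192.0.2.11 master-0.cluster.example.com
        192.0.2.12 master-1.cluster.example.com
        192.0.2.13 master-2.cluster.example.com
        192.0.2.10 api.cluster.example.com api-int.cluster.example.com
        fallthrough
    }
    forward . /etc/resolv.conf   # everything else goes upstream
}
```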

  • After that, we'll go through the basics of kubelet static Pods and how they function, and how we're using them to manage components like Keepalived, CoreDNS, and HAProxy.
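For a quick preview of the mechanism: the kubelet watches a directory on disk (configured via the kubelet's `staticPodPath`, conventionally `/etc/kubernetes/manifests`) and runs any Pod manifest dropped there, with no API server involvement. A sketch of what a Keepalived static-pod manifest could look like; the image reference and paths are placeholders:

```yaml
# /etc/kubernetes/manifests/keepalived.yaml (sketch)
apiVersion: v1
kind: Pod
metadata:
  name: keepalived
spec:
  hostNetwork: true              # the VIP must live on the host's network
  containers:
  - name: keepalived
    image: quay.io/example/keepalived:latest   # placeholder image
    securityContext:
      privileged: true           # needed to manage VRRP and host addresses
    volumeMounts:
    - name: conf
      mountPath: /etc/keepalived
  volumes:
  - name: conf
    hostPath:
      path: /etc/keepalived      # config baked in via the Ignition file
```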

  • Finally, we'll create a hyper-converged OpenShift cluster, using a bash script to automate portions of the process. Since I don't have enough hardware to demonstrate this on bare metal, I'll create virtual machines in an OpenStack environment and use them as bare-metal servers. It's a simulated bare-metal deployment scenario, but the approach applies to other use cases.

Now that we have the basics covered, let's dig into the details of each topic.
