Balkrishna Pandey

OpenShift UPI: What is it and how does it work?

For readers who are only familiar with Kubernetes, OpenShift may be a new topic. Before we go into the details of the OpenShift UPI installation method, it helps to understand what OpenShift is.
OpenShift is a cloud development Platform as a Service (PaaS) developed by Red Hat. It is an open-source development platform that enables developers to build and deploy their applications on cloud infrastructure. OpenShift provides predefined application environments and builds upon Kubernetes to support DevOps practices. According to Red Hat, it is the next generation of application platforms.
OpenShift can be deployed in three ways:

  • OpenShift Online
  • OpenShift Dedicated
  • OpenShift Container Platform

The significant difference between these three offerings is who manages them and where they run. OpenShift Online is a fully hosted cloud service, OpenShift Dedicated is a managed single-tenant service (available only on AWS and GCP), and OpenShift Container Platform is the self-managed offering that you install on your own infrastructure. Each of these platforms provides varying levels of control, flexibility, and scalability depending on your needs.
If you are looking to develop cloud applications, OpenShift Online is a great way to get started quickly without worrying about managing any infrastructure or servers. On the other hand, if you prefer more control over the cluster and its environment, OpenShift Dedicated or OpenShift Container Platform are great options.
In this tutorial, we explore the OpenShift Container Platform (OCP). You can stand up OCP clusters on your own infrastructure based on your needs, but broadly, the installation methods fall into three categories.

  • User Provisioned Infrastructure (UPI): The user creates the infrastructure

  • Installer Provisioned Infrastructure (IPI): The installer creates the infrastructure

  • Assisted Installer (AI): Designed primarily for bare-metal installations, AI is also widely used in the Zero Touch Provisioning (ZTP) process, which combines Red Hat Advanced Cluster Management (RHACM) with a Git workflow.

What is OpenShift UPI (User Provisioned Infrastructure)?

Since this POC is based on OCP UPI, let's understand what an OpenShift UPI deployment is and how it works in depth.
OpenShift UPI is a deployment method in which users provision their own infrastructure and then deploy OpenShift onto it. The administrator creates and manages the OpenShift nodes using whatever provisioning process already exists in their environment. When using this method, the Ignition configs must be provided to the nodes so that they can configure themselves and join the cluster.

Note: Ignition is the utility that Red Hat Enterprise Linux CoreOS (RHCOS) runs on a node's first boot to change the system's state; the Ignition configs are the files that tell it what to do. OpenShift uses it as the mechanism for deployment and host configuration.
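For reference, this is roughly how the Ignition configs are produced in a UPI install. This is a minimal sketch; the directory name is just an example, and it must already contain an install-config.yaml:

    # Generate the Kubernetes manifests and then the Ignition configs
    # from an existing install-config.yaml (the directory name is an example)
    openshift-install create manifests --dir=ocp-upi
    openshift-install create ignition-configs --dir=ocp-upi

    # The resulting files are what each machine type consumes on first boot
    ls ocp-upi/*.ign
    # bootstrap.ign  master.ign  worker.ign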

Prerequisites to set up an OpenShift UPI cluster

There are several prerequisites for setting up a UPI cluster. For our POC we will remove some of them, such as the compute machines, some DNS entries, and the need for external load balancer instances.
But first, let's review the existing official requirements, why we need each one, and which components we will use to remove these dependencies.

One temporary bootstrap machine

The cluster needs a temporary bootstrap machine to deploy OpenShift onto the three control plane machines. You can remove the bootstrap machine after the cluster installation is complete. The bootstrap machine is required in both IPI and UPI, and in our POC as well.

Note: The Assisted Installer (AI) removes the need for an extra machine by selecting one of the nodes as the bootstrap machine. In an AI deployment, the installer removes the temporary control plane and promotes the bootstrap machine to a regular control plane node after the installation succeeds.
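As a rough sketch of how bootstrap removal plays out in a UPI install (reusing the example directory from above), the installer can tell you when the bootstrap machine has finished its job:

    # Wait until the bootstrap machine has handed the cluster over
    # to the three control plane machines
    openshift-install wait-for bootstrap-complete --dir=ocp-upi --log-level=info

    # After this returns successfully, the bootstrap machine can be shut down
    # and removed from the API / machine config load balancer backends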

Three control plane machines

Three control plane machines run the Kubernetes and OpenShift Container Platform services. These services help manage and control the containers on the platform. In our hyper-converged POC, we will also use these machines to run workloads, so all of the control plane nodes also act as workers and schedule user workloads (see the sketch below).
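By default, the control plane nodes are not schedulable for user workloads. One way to make them schedulable is through the cluster Scheduler resource; a minimal sketch, assuming the cluster is already up and you are logged in as a cluster administrator:

    # Allow user workloads to be scheduled on the control plane nodes
    oc patch schedulers.config.openshift.io cluster \
      --type merge -p '{"spec":{"mastersSchedulable":true}}'

    # Verify: the control plane nodes should now also carry the worker role
    oc get nodes

The same setting can also be applied before installation by editing the cluster-scheduler-02-config.yml manifest that openshift-install create manifests generates.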

At least two compute machines/worker nodes

The workloads that users request to run on the OpenShift Container Platform run on the compute machines. We will remove these machines for our hyper-converged deployment.

External DNS requirements

As per the official requirements for OCP UPI deployments, DNS name resolution is required for the following components. We will remove some of these DNS requirements in our hyper-converged POC. An example zone file covering these records follows the list.

  • DNS entry for the API (external access): We have to create a DNS entry (a DNS A/AAAA or CNAME record and a DNS PTR record) with the pattern api.<cluster_name>.<base_domain> to access the OpenShift API externally. You should be able to resolve this record from all the nodes within the cluster and from clients external to the cluster. Technically, for our POC, since we are using CoreDNS running inside the cluster, we don't need this record to create the cluster, but we do need it for external access.

  • DNS entry for the API (internal access within the cluster): We have to create a DNS entry (a DNS A/AAAA or CNAME record and a DNS PTR record) with the pattern api-int.<cluster_name>.<base_domain> to access the OpenShift API internally. The record must be resolvable from all the nodes within the cluster. The kubelet and other components use this endpoint to interact with the OpenShift API.
    The machine config server also runs on port 22623 on the bootstrap machine and all control plane machines. The control plane machines fetch their machine config from the bootstrap machine through this record during cluster initialization. Because this happens during machine startup, before the in-cluster CoreDNS is available, the machines cannot resolve the name internally. As a result, we also require this external DNS record for our POC.

Note: The Assisted Installer (AI) uses a Keepalived VIP address to remove this dependency. We can technically use the same approach; we would have to generate the certificates for that VIP address and update the configuration to use the IP address instead of the DNS name.

  • Wildcard DNS entry for applications (routes): We have to create a wildcard DNS record (A/AAAA or CNAME) with the pattern *.apps.<cluster_name>.<base_domain> that points to the application ingress load balancer. This is how you access applications running inside the cluster. The application ingress load balancer sends traffic to the machines that run the Ingress Controller pods based on the route rules. The record must be resolvable from all the nodes within the cluster; otherwise, the OAuth service will fail, and so will any application that depends on it. It must also be resolvable from clients external to the cluster. For the cluster to function in our POC, we don't need to create this record set externally; we will use the cluster-level CoreDNS to remove this dependency. But to access web applications from outside the cluster, we do need it.

  • A DNS record for the bootstrap machine: We have to create a DNS record (an A/AAAA or CNAME record, and a DNS PTR record) with the pattern bootstrap.<cluster_name>.<base_domain>. This record must be resolvable by the controller nodes within the cluster. We don't need it; we will use the cluster-level CoreDNS to remove this dependency.

  • DNS entries for the control plane machines: We have to create DNS records (an A/AAAA or CNAME record, and a DNS PTR record) with the pattern <master><n>.<cluster_name>.<base_domain>. These records must be resolvable by the nodes within the cluster. We don't need them; we will use the cluster-level CoreDNS to remove this dependency.

  • DNS entries for the compute machines: We have to create DNS records (an A/AAAA or CNAME record, and a DNS PTR record) with the pattern <worker><n>.<cluster_name>.<base_domain>. These records must be resolvable by the nodes within the cluster. We don't have any extra workers in our POC, so we don't need them.
    The Kubernetes API, bootstrap, control plane, and compute machines all require both DNS name resolution and reverse DNS resolution. Set up A/AAAA or CNAME records for name resolution and PTR records for reverse name resolution. The reverse records are essential because Red Hat Enterprise Linux CoreOS (RHCOS) uses them to assign hostnames to all nodes, and the certificate signing requests (CSRs) are generated from them. OCP cannot function without these CSRs.
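To tie these records together, here is an illustrative BIND-style zone snippet, assuming a cluster named ocp4 under the base domain example.com with made-up addresses; adjust the names, addresses, and the matching reverse zone to your environment:

    ; Forward records (illustrative values only)
    api.ocp4.example.com.        IN A   192.168.10.5    ; external API endpoint
    api-int.ocp4.example.com.    IN A   192.168.10.5    ; internal API / machine config server
    *.apps.ocp4.example.com.     IN A   192.168.10.6    ; application ingress (routes)
    bootstrap.ocp4.example.com.  IN A   192.168.10.10
    master0.ocp4.example.com.    IN A   192.168.10.11
    master1.ocp4.example.com.    IN A   192.168.10.12
    master2.ocp4.example.com.    IN A   192.168.10.13
    worker0.ocp4.example.com.    IN A   192.168.10.14

    ; Matching PTR records in the reverse zone, for example:
    ; 11.10.168.192.in-addr.arpa.  IN PTR  master0.ocp4.example.com.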

Load-balancing requirements

A load balancer makes OpenShift highly available, so you can still reach the cluster even if some control plane nodes aren't responding. For a UPI cluster, an external load balancer is officially an absolute requirement. For this POC, we'll use kubelet-managed HAProxy, Keepalived, and CoreDNS inside the same cluster to get rid of the external load balancer.
A load balancer is required for two components (a minimal HAProxy sketch follows the list):

  • API load balancer: This provides a common endpoint for users, both humans and machines, to interact with and configure the platform. It is required for both the internal and external API endpoints.

  • Application ingress load balancer: Provides an ingress point for application traffic entering the cluster from outside.
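As a minimal sketch of these two roles, here is what an external HAProxy configuration typically looks like. The hostnames and addresses reuse the illustrative values from the zone snippet above, and a real configuration would also need global and defaults sections with timeouts; the same port layout applies to the in-cluster HAProxy we will run for the POC:

    # API load balancer: Kubernetes API (6443) and machine config server (22623)
    frontend api
        bind *:6443
        mode tcp
        default_backend api
    backend api
        mode tcp
        balance roundrobin
        server bootstrap 192.168.10.10:6443 check   # remove after bootstrap completes
        server master0   192.168.10.11:6443 check
        server master1   192.168.10.12:6443 check
        server master2   192.168.10.13:6443 check

    frontend machine-config
        bind *:22623
        mode tcp
        default_backend machine-config
    backend machine-config
        mode tcp
        server bootstrap 192.168.10.10:22623 check  # remove after bootstrap completes
        server master0   192.168.10.11:22623 check
        server master1   192.168.10.12:22623 check
        server master2   192.168.10.13:22623 check

    # Application ingress load balancer: route traffic (443, and likewise 80)
    # goes to whichever nodes run the Ingress Controller pods
    frontend ingress-https
        bind *:443
        mode tcp
        default_backend ingress-https
    backend ingress-https
        mode tcp
        server master0 192.168.10.11:443 check
        server master1 192.168.10.12:443 check
        server master2 192.168.10.13:443 check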
We now have a basic understanding of OpenShift and the prerequisites for a UPI cluster. Let's move on to the next topic: generating the Ignition configs.

References:

Official Doc
