OCP 4 Overview

Three New Functionalities

  1. Self-Managing Platform
  2. Application Lifecycle Management (OLM):
    • OLM Operator:
      • Responsible for deploying applications defined by a ClusterServiceVersion (CSV) manifest.
      • Not concerned with the creation of the required resources; users can choose to manually create these resources using the CLI, or users can choose to create these resources using the Catalog Operator.
    • Catalog Operator:
      • Responsible for resolving and installing CSVs and the required resources they specify. It is also responsible for watching CatalogSources for updates to packages in channels and upgrading them (optionally automatically) to the latest available versions.
      • A user who wishes to track a package in a channel creates a Subscription resource configuring the desired package, channel, and the CatalogSource from which to pull updates (see the Subscription sketch after this list). When updates are found, an appropriate InstallPlan is written into the namespace on behalf of the user.
  3. Automated Infrastructure Management (Over-The-Air Updates)
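
As a concrete illustration of the Subscription flow described above, here is a minimal Subscription sketch; the package name, channel, and namespaces are illustrative assumptions:

# Minimal Subscription sketch (package, channel, and namespaces are illustrative)
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: amq-streams
  namespace: openshift-operators
spec:
  name: amq-streams                 # package to track
  channel: stable                   # channel within the package
  source: redhat-operators          # CatalogSource to pull updates from
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic    # or Manual to gate upgrades

When OLM finds an update in the channel, it writes the corresponding InstallPlan into the namespace on the user's behalf.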


New Technical Components

  • New Installer:
  • Storage: cloud-integrated storage capability used by default via the OCS Operator (Red Hat)
  • Operators End-To-End!: responsible for reconciling the system to the desired state
    • Cluster configuration kept as API objects that ease its maintenance (“everything-as-code” approach):
      • Every component is configured with Custom Resources (CR) that are processed by operators.
      • No more painful upgrades and synchronization among multiple nodes and no more configuration drift.
    • List of operators that configure cluster components (API objects):
      • API server
      • Nodes via Machine API
      • Ingress
      • Internal DNS
      • Logging (EFK) and Monitoring (Prometheus)
      • Sample applications
      • Networking
      • Internal Registry
      • OAuth (and authentication in general)
      • etc
  • At the Node Level:
    • RHEL CoreOS is the result of merging CoreOS Container Linux and Red Hat Atomic Host functionality, and is currently the only supported OS to host OpenShift 4.
    • Node provisioning with Ignition, which came with CoreOS Container Linux
    • Atomic host updates with rpm-ostree
    • CRI-O as a container runtime
    • SELinux enabled by default
  • Machine API: Provisioning of nodes. Abstraction mechanism added (API objects to declaratively manage the cluster):
    • Based on Kubernetes Cluster API project
    • Provides a new set of machine resources:
      • Machine
      • MachineDeployment
      • MachineSet (see the sketch after this list):
        • easily distributes your nodes among different Availability Zones
        • manages multiple node pools (e.g. a pool for testing, a pool for machine learning with GPUs attached, etc)
  • Everything “just another pod”
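
As an illustration of the Machine API resources above, here is a minimal MachineSet sketch for AWS; the cluster ID mycluster, instance type, and availability zone are illustrative assumptions, and a real manifest also sets fields such as the AMI and credentials secret:

# Minimal MachineSet sketch (names and AWS details are illustrative)
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: mycluster-worker-us-east-1a          # hypothetical: <cluster-id>-worker-<az>
  namespace: openshift-machine-api
spec:
  replicas: 2
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: mycluster-worker-us-east-1a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: mycluster
        machine.openshift.io/cluster-api-machineset: mycluster-worker-us-east-1a
    spec:
      providerSpec:
        value:
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          kind: AWSMachineProviderConfig
          instanceType: m5.large             # use a GPU type (e.g. p3.2xlarge) for an ML pool
          placement:
            region: us-east-1
            availabilityZone: us-east-1a

One such MachineSet per Availability Zone spreads nodes across zones; a GPU node pool is simply another MachineSet with a different instance type.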

Installation & Cluster Autoscaler

  • New openshift-install installer tool, a replacement for the old Ansible scripts.
  • A full install takes about 40 minutes on AWS; the installer drives Terraform under the hood.
  • 2 installation patterns:
    1. Installer Provisioned Infrastructure (IPI)
    2. User Provisioned Infrastructure (UPI)
  • The whole process can be done in one command and requires minimal infrastructure knowledge (IPI): openshift-install create cluster
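
For reference, a minimal install-config.yaml sketch for the AWS IPI flow; the base domain, cluster name, and region are placeholder assumptions (openshift-install create cluster prompts for these values interactively):

# Minimal install-config.yaml sketch (values are placeholders)
apiVersion: v1
baseDomain: example.com
metadata:
  name: mycluster
platform:
  aws:
    region: us-east-1
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 3
pullSecret: '...'                  # your Red Hat pull secret
sshKey: '...'                      # public key for node access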



IPI & UPI

  • 2 installation patterns:
    1. Installer Provisioned Infrastructure (IPI): On supported platforms, the installer is capable of provisioning the underlying infrastructure for the cluster. The installer programmatically creates all portions of the networking, machines, and operating systems required to support the cluster. Think of it as a best-practice reference architecture implemented in code. It is recommended that most users make use of this functionality to avoid having to provision their own infrastructure. The installer creates and destroys the infrastructure components it needs over the life of the cluster.
    2. User Provisioned Infrastructure (UPI): For other platforms or in scenarios where installer provisioned infrastructure would be incompatible, the installer can stop short of creating the infrastructure, and allow the platform administrator to provision their own using the cluster assets generated by the install tool. Once the infrastructure has been created, OpenShift 4 is installed, maintaining its ability to support automated operations and over-the-air platform updates.



Cluster Autoscaler Operator

  • Adjusts the size of an OpenShift Container Platform cluster to meet its current deployment needs, using declarative, Kubernetes-style arguments.
  • Increases the size of the cluster when there are pods that fail to schedule on any of the current nodes due to insufficient resources, or when another node is necessary to meet deployment needs. The ClusterAutoscaler does not increase the cluster resources beyond the limits that you specify.
  • A huge improvement over the manual, error-prone process used in previous versions of OpenShift with RHEL nodes.
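
A minimal ClusterAutoscaler sketch; the resource limits are illustrative assumptions (the CR must be named default):

# Minimal ClusterAutoscaler sketch (limits are illustrative)
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default                    # the operator watches a single CR named "default"
spec:
  resourceLimits:
    maxNodesTotal: 12
    cores:
      min: 8
      max: 96
    memory:                        # in GiB
      min: 32
      max: 256
  scaleDown:
    enabled: true

In practice this cluster-wide resource is paired with per-MachineSet MachineAutoscaler resources that define minimum and maximum replicas for each node pool.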


Operators

Introduction

  • Core of the platform
  • The hierarchy of operators, with the ClusterVersion Operator at the top, is the single entry point for configuration changes and is responsible for reconciling the system to the desired state.
  • For example, if you break a critical cluster resource directly, the system automatically recovers itself. 
  • As with cluster maintenance, the Operator Framework is also used for applications. As a user, you get the SDK, OLM (the lifecycle manager for all operators and their associated services running across the cluster) and the embedded OperatorHub.
  • OLM Architecture
  • Adding Operators to a Cluster (They can be added via CatalogSource)
  • The supported method of using Helm charts with OpenShift is via the Helm Operator
  • twitter.com/operatorhubio
  • View the list of Operators available to the cluster from the OperatorHub:
$ oc get packagemanifests -n openshift-marketplace
NAME                   AGE
amq-streams            14h
packageserver          15h
couchbase-enterprise   14h
mongodb-enterprise     14h
etcd                   14h
myoperator             14h
...


Catalog

  • Developer Catalog
  • Installed Operators
  • OperatorHub (OLM)
  • Operator Management:
    • Operator Catalogs are groups of Operators you can make available on the cluster. They can be added via a CatalogSource (e.g. a catalogsource.yaml manifest; see the sketch after this list). Subscribe and grant a namespace access to use the installed Operators.
    • Operator Subscriptions keep your services up to date by tracking a channel in a package. The approval strategy determines either manual or automatic updates.
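
A minimal catalogsource.yaml sketch, as referenced above; the catalog name and index image are placeholder assumptions:

# Minimal CatalogSource sketch (name and image are placeholders)
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: quay.io/example/operator-index:v1   # placeholder catalog/index image
  displayName: My Operator Catalog
  publisher: Example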


Certified Operators, OLM Operators and Red Hat Operators

  • Certified Operators (packaged by certified partners):
    • Not provided by Red Hat
    • Supported by Red Hat
    • Deployed via “Package Server” OLM Operator
  • OLM Operators:
    • Packaged by Red Hat
    • “Package Server” OLM Operator includes a CatalogSource provided by Red Hat
  • Red Hat Operators:
    • Packaged by Red Hat
    • Deployed via “Package Server” OLM Operator
  • Community Operators:
    • Deployed by any means
    • Not supported by Red Hat


Deploy and bind enterprise-grade microservices with Kubernetes Operators

OpenShift Container Storage Operator (OCS)

OCS 3 (OpenShift 3)
  • OpenShift Container Storage based on GlusterFS technology.
  • Not OpenShift 4 compliant: migration tooling will be available to facilitate the move to OCS 4.x (OpenShift Gluster App Migration Tool).
OCS 4 (OpenShift 4)
  • OCS Operator based on Rook.io with Operator LifeCycle Manager (OLM).
  • Tech Stack:
    • Rook (not to be confused with the non-Red Hat “Rook Ceph” project; see the RH reference).
      • Replaces Heketi (OpenShift 3)
      • Uses Red Hat Ceph Storage and NooBaa.
    • Red Hat Ceph Storage
    • NooBaa:
      • Red Hat Multi-Cloud Gateway (AWS, Azure, GCP, etc)
      • Asynchronous replication of data between the local Ceph cluster and the cloud provider
      • Deduplication
      • Compression
      • Encryption
  • Backups available in OpenShift 4.2+ (Snapshots + Restore of Volumes)
  • OCS Dashboard in OCS Operator
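
For illustration, a minimal sketch of the StorageCluster resource that the OCS 4 operator consumes; the device-set count, size, and storage class are illustrative assumptions:

# Minimal StorageCluster sketch (sizes and storage class are illustrative)
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  storageDeviceSets:
  - name: ocs-deviceset
    count: 1                       # one set of...
    replica: 3                     # ...three replicated devices
    dataPVCTemplate:
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 512Gi
        storageClassName: gp2      # assumes AWS EBS
        volumeMode: Block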


Cluster Network Operator (CNO) & Routers

$ oc describe clusteroperators/ingress
$ oc logs --namespace=openshift-ingress-operator deployments/ingress-operator

ServiceMesh Operator



Serverless Operator (Knative)

Crossplane Operator (Universal Control Plane API for Cloud Computing)

Monitoring & Observability

Grafana

  • Integrated Grafana v5.4.3 (deployed by default):
    • Monitoring -> Dashboards
  • Project “openshift-monitoring”
  • grafana.com/docs/v5.4/

Prometheus

Alerts & Silences

  • Integrated Alertmanager 0.16.2 (deployed by default):
    • Monitoring -> Alerts
    • Monitoring -> Silences
    • Silences temporarily mute alerts based on a set of conditions that you define. Notifications are not sent for alerts that meet the given conditions.
  • Project “openshift-monitoring”
  • prometheus.io/docs/alerting/alertmanager/

Cluster Logging (EFK)

  • thenewstack.io: Log Management for Red Hat OpenShift
  • EFK: Elasticsearch + Fluentd + Kibana
  • Cluster Logging EFK not deployed by default
  • As an OpenShift Container Platform cluster administrator, you can deploy cluster logging to aggregate logs for a range of OpenShift Container Platform services.
  • The OpenShift Container Platform cluster logging solution requires that you install both the Cluster Logging Operator and the Elasticsearch Operator; there is no use case in OpenShift Container Platform for installing the operators individually. The Elasticsearch Operator must be installed using the CLI, while the Cluster Logging Operator can be installed using the web console or the CLI. Deployment procedure based on CLI + web console:
OCP Release      Elasticsearch   Fluentd   Kibana   EFK deployed by default
OpenShift 3.11   5.6.13.6        0.12.43   5.6.13   No
OpenShift 4.1    5.6.16          ?         5.6.16   No
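
Once both operators are installed, deploying the stack comes down to creating a ClusterLogging instance. A minimal sketch, with node count and storage values as illustrative assumptions:

# Minimal ClusterLogging sketch (node count and storage are illustrative)
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance                   # the operator expects this exact name
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: gp2      # assumes AWS EBS
        size: 200G
  visualization:
    type: kibana
    kibana:
      replicas: 1
  collection:
    logs:
      type: fluentd
      fluentd: {}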


Build Images. Next-Generation Container Image Building Tools

  • Redesign of how images are built on the platform.
  • Instead of relying on a daemon on the host to manage containers, image creation, and image pushing, we are leveraging Buildah running inside our build pods.
  • This aligns with the general OpenShift 4 theme of making everything “just another pod”
  • A simplified set of build workflows, not dependent on the node host having a specific container runtime available. 
  • Dockerfiles that built under OpenShift 3.x will continue to build under OpenShift 4.x, and S2I builds will continue to function as well.
  • The actual BuildConfig API is unchanged, so a BuildConfig from a v3.x cluster can be imported into a v4.x cluster and work without modification.
  • Podman & Buildah for docker users
  • OpenShift ImageStreams
  • OpenShift 4 image builds
  • Custom image builds with Buildah
  • Rootless podman and NFS
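
Since the BuildConfig API is unchanged, a minimal Docker-strategy BuildConfig sketch applies to both 3.x and 4.x clusters; the name and Git URL are placeholder assumptions:

# Minimal BuildConfig sketch (name and repository are placeholders)
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp
spec:
  source:
    git:
      uri: https://github.com/example/myapp.git
  strategy:
    dockerStrategy: {}             # executed by Buildah inside a build pod on OCP 4
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest           # assumes an ImageStream named "myapp" exists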


Registry & Quay

  • A Docker registry is a place to store and distribute Docker images.
  • It serves as a target for your docker push and docker pull commands.
  • OpenShift ImageStreams
  • The registry is now managed by an Operator instead of oc adm registry.
  • Quay.io is a hosted Docker registry from CoreOS:
    • Main features:
      • “Powerful build triggers”
      • “Advanced team permissions”
      • “Secure storage”
    • One of the more enterprise-friendly options out there, offering fine-grained permission controls.
    • They support any git server and let you build advanced workflows by doing things like mapping git branches to Docker tags so that when you commit code it automatically builds a corresponding image.
    • Quay offers unlimited free public repositories. Otherwise, you pay by the number of private repositories. There’s no extra charge for storage or bandwidth.
  • Quay 3.0, released in May 2019, added support for multiple architectures, Windows containers, and a Red Hat Enterprise Linux (RHEL)-based image for the container image registry.
  • Quay 3.1, released in September 2019, added repository mirroring, which complements the existing geographic replication features. Repository mirroring mirrors content between distinct registries: you can synchronize whitelisted repositories, or a subset of a source registry, into Quay. This makes it much easier to distribute images and related data through Quay.
  • Quay Community Edition operator
  • The Quay 3.1 Certified Operator is not included with OpenShift and must be purchased
  • Open Source ProjectQuay.io Container Registry

Local Development Environment

  • For version 3 we have the Container Development Kit (or Minishift, its open source equivalent for OKD), which launches a single-node VM with OpenShift in a few minutes. It is also perfect for testing as part of a CI/CD pipeline.
  • OpenShift 4 on your laptop: a working solution for a single-node OpenShift cluster is provided by a new project called CodeReady Containers.
  • Procedure (a sketch; the archive name below varies by CRC release and platform):

# untar the downloaded release archive
tar xvf crc-linux-amd64.tar.xz
# one-time host setup (virtualization, networking, DNS)
crc setup
# start the single-node cluster
crc start
# put the bundled oc client on your PATH
eval $(crc oc-env)
# log in with the credentials printed by "crc start"
oc login -u kubeadmin https://api.crc.testing:6443

OpenShift on Azure

OpenShift Youtube

OpenShift 4 Training

OpenShift 4 Roadmap

Kubevirt Virtual Machine Management on Kubernetes

Networking and Network Policy in OCP4. SDN/CNI plug-ins

ocp4 cni arch

Multiple Networks with SDN/CNI plug-ins. Usage scenarios for an additional network

  • Understanding multiple networks: In Kubernetes, container networking is delegated to networking plug-ins that implement the Container Network Interface (CNI). OpenShift Container Platform uses the Multus CNI plug-in to allow chaining of CNI plug-ins. During cluster installation, you configure your default Pod network. The default network handles all ordinary network traffic for the cluster. You can define an additional network based on the available CNI plug-ins and attach one or more of these networks to your Pods. You can define more than one additional network for your cluster, depending on your needs. This gives you flexibility when you configure Pods that deliver network functionality, such as switching or routing.
  • You can use an additional network in situations where network isolation is needed, including data plane and control plane separation. Isolating network traffic is useful for the following performance and security reasons:
    • Performance: You can send traffic on two different planes in order to manage how much traffic is along each plane.
    • Security: You can send sensitive traffic onto a network plane that is managed specifically for security considerations, and you can separate private data that must not be shared between tenants or customers.
  • All of the Pods in the cluster still use the cluster-wide default network to maintain connectivity across the cluster. Every Pod has an eth0 interface that is attached to the cluster-wide Pod network. You can view the interfaces for a Pod by using the oc exec -it <pod_name> -- ip a command. If you add additional network interfaces that use Multus CNI, they are named net1, net2, ..., netN.
  • To attach additional network interfaces to a Pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a Custom Resource (CR) that has a NetworkAttachmentDefinition type. A CNI configuration inside each of these CRs defines how that interface is created (see the sketch after this list).
  • openshift.com: Demystifying Multus 🌟
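
A minimal NetworkAttachmentDefinition sketch for a macvlan secondary interface, as described above; the name, master interface, and IPAM type are illustrative assumptions:

# Minimal NetworkAttachmentDefinition sketch (name, master, and IPAM are illustrative)
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": { "type": "dhcp" }
    }

A Pod requests the extra interface with the annotation k8s.v1.cni.cncf.io/networks: macvlan-net; Multus then attaches it as net1 alongside the default eth0.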

Istio CNI plug-in

  • Istio CNI plug-in 🌟: Red Hat OpenShift Service Mesh includes a CNI plug-in, which provides an alternate way to configure application pod networking. The CNI plug-in replaces the init-container network configuration, eliminating the need to grant service accounts and projects access to Security Context Constraints (SCCs) with elevated privileges.

Calico CNI Plug-in

Third Party Network Operators with OpenShift

Storage in OCP 4. OpenShift Container Storage (OCS)

Red Hat Advanced Cluster Management for Kubernetes

OpenShift Kubernetes Engine (OKE)


Red Hat CodeReady Containers. OpenShift 4 on your laptop

OpenShift Hive: Cluster-as-a-Service. Easily provision new PaaS environments for developers

OpenShift 4 Master API Protection in Public Cloud

Backup and Migrate to OpenShift 4

OKD4. OpenShift 4 without enterprise-level support

OpenShift Serverless with Knative

Helm Charts and OpenShift 4

Red Hat Marketplace

Kubestone. Benchmarking Operator for K8s and OpenShift

OpenShift Cost Management

Operators in OCP 4

Quay Container Registry

Application Migration Toolkit

Developer Sandbox

OpenShift Topology View

OpenBuilt Platform for the Construction Industry

Slides