Authors: Kubernetes 1.26 Release Team
It's with immense joy that we announce the release of Kubernetes v1.26!
This release includes a total of 37 enhancements: eleven of them are graduating to Stable, ten are graduating to Beta, and sixteen of them are entering Alpha. We also have twelve features being deprecated or removed, three of which we detail further in this announcement.
Release theme and logo
Kubernetes 1.26: Electrifying
The theme for Kubernetes v1.26 is Electrifying.
Each Kubernetes release is the result of the coordinated effort of dedicated volunteers, and only made possible due to the use of a diverse and complex set of computing resources, spread out through multiple datacenters and regions worldwide. The end results of a release - the binaries, the container images, the documentation - are then deployed on a growing number of personal, on-premises, and cloud computing resources.
In this release we want to recognise the importance of all these building blocks on which Kubernetes is developed and used, while at the same time raising awareness of the importance of taking the energy consumption footprint into account: environmental sustainability is an inescapable concern of creators and users of any software solution, and the environmental footprint of software, like Kubernetes, is an area which we believe will play a significant role in future releases.
As a community, we always work to make each new release process better than before (in this release, for example, we have started to use Projects for tracking enhancements). If v1.24 "Stargazer" had us looking upwards, to what is possible when our community comes together, and v1.25 "Combiner" showed what those combined efforts are capable of, then v1.26 "Electrifying" is dedicated to all of those whose individual motion, integrated into the release flow, made all of this possible.
Major themes
Kubernetes v1.26 is composed of many changes, brought to you by a worldwide team of volunteers. For this release, we have identified several major themes.
Change in container image registry
In the previous release, Kubernetes changed its container image registry, spreading the load across multiple cloud providers and regions. This change reduced the reliance on a single entity and provided a faster download experience for a large number of users.
This release of Kubernetes is the first that is exclusively published in the new registry.k8s.io container image registry. In the (now legacy) k8s.gcr.io image registry, no container image tags for v1.26 will be published, and only tags from releases before v1.26 will continue to be updated. Refer to registry.k8s.io: faster, cheaper and Generally Available for more information on the motivation, advantages, and implications of this significant change.
CRI v1alpha2 removed
With the adoption of the Container Runtime Interface (CRI) and the removal of dockershim in v1.24, the CRI is the only supported and documented way through which Kubernetes interacts with different container runtimes. Each kubelet negotiates which version of CRI to use with the container runtime on that node.
In the previous release, the Kubernetes project recommended using CRI version v1, but kubelet could still negotiate the use of CRI v1alpha2, which was deprecated.
Kubernetes v1.26 drops support for CRI v1alpha2. That removal will result in the kubelet not registering the node if the container runtime doesn't support CRI v1. This means that containerd minor version 1.5 and older are not supported in Kubernetes 1.26; if you use containerd, you will need to upgrade to containerd version 1.6.0 or later before you upgrade that node to Kubernetes v1.26. This applies equally to any other container runtimes that only support the v1alpha2 version: if that affects you, you should contact the container runtime vendor for advice or check their website for additional instructions on how to move forward.
Storage improvements
Following the GA of the core Container Storage Interface (CSI) Migration feature in the previous release, CSI migration is an ongoing effort that we've been working on for a few releases now, and this release continues to add (and remove) features aligned with the migration's goals, as well as other improvements to Kubernetes storage.
CSI migration for Azure File and vSphere graduated to stable
Both the vSphere and Azure in-tree driver migrations to CSI have graduated to Stable. You can find more information about them in the vSphere CSI driver and Azure File CSI driver repositories.
Delegate FSGroup to CSI Driver graduated to stable
This feature allows Kubernetes to supply the pod's fsGroup to the CSI driver when a volume is mounted so that the driver can utilize mount options to control volume permissions. Previously, the kubelet would always apply the fsGroup ownership and permission change to files in the volume according to the policy specified in the Pod's .spec.securityContext.fsGroupChangePolicy field. Starting with this release, CSI drivers have the option to apply the fsGroup settings during attach or mount time of the volumes.
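As a rough sketch of the two pieces involved (the driver name, image, and PVC below are placeholders, not part of this release), a CSIDriver object can declare that it handles fsGroup itself, while the Pod's security context carries the fsGroup and fsGroupChangePolicy that the driver then honors at mount time:

```yaml
# Hypothetical CSI driver declaring that it applies fsGroup-based permissions itself.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: example.csi.vendor.io        # placeholder driver name
spec:
  fsGroupPolicy: File                # driver, not kubelet, applies file ownership
---
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo
spec:
  securityContext:
    fsGroup: 2000                    # group ownership requested for volume files
    fsGroupChangePolicy: OnRootMismatch
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-pvc         # placeholder PVC backed by the CSI driver above
```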
In-tree GlusterFS driver removal
Already deprecated in the v1.25 release, the in-tree GlusterFS driver was removed in this release.
In-tree OpenStack Cinder driver removal
This release removed the deprecated in-tree storage integration for OpenStack (the cinder volume type). You should migrate to the external cloud provider and CSI driver from https://github.com/kubernetes/cloud-provider-openstack instead. For more information, visit Cinder in-tree to CSI driver migration.
Signing Kubernetes release artifacts graduates to beta
Introduced in Kubernetes v1.24, this feature constitutes a significant milestone in improving the security of the Kubernetes release process. All release artifacts are signed keylessly using cosign, and both binary artifacts and images can be verified.
Support for Windows privileged containers graduates to stable
Privileged container support allows containers to run with similar access to the host as processes that run on the host directly. Support for this feature in Windows nodes, called HostProcess containers, now graduates to Stable, enabling access to host resources (including network resources) from privileged containers.
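As a minimal sketch of what a HostProcess Pod looks like (the image, command, and user below are placeholders), the Pod-level Windows security context sets hostProcess and a host identity, and the Pod must use the host network:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostprocess-demo
spec:
  securityContext:
    windowsOptions:
      hostProcess: true                      # run the container as a host process
      runAsUserName: "NT AUTHORITY\\SYSTEM"  # host identity to run as
  hostNetwork: true                          # required for HostProcess pods
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: hostprocess
    image: mcr.microsoft.com/windows/nanoserver:ltsc2022   # placeholder image
    command: ["cmd.exe", "/c", "ping -t 127.0.0.1"]         # placeholder workload
```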
Improvements to Kubernetes metrics
This release has several noteworthy improvements to metrics.
Metrics framework extension graduates to alpha
The metrics framework extension graduates to Alpha, and documentation is now published for every metric in the Kubernetes codebase. This enhancement adds two additional metadata fields to Kubernetes metrics: Internal and Beta, representing different stages of metric maturity.
Component Health Service Level Indicators graduates to alpha
Also improving the ability to consume Kubernetes metrics, component health Service Level Indicators (SLIs) have graduated to Alpha: by enabling the ComponentSLIs feature flag, there will be an additional metrics endpoint that allows the calculation of Service Level Objectives (SLOs) from raw health check data converted into metric format.
Feature metrics are now available
Feature metrics are now available for each Kubernetes component, making it possible to track whether each active feature gate is enabled by checking the component's metric endpoint for kubernetes_feature_enabled.
Dynamic Resource Allocation graduates to alpha
Dynamic Resource Allocation is a new feature that puts resource scheduling in the hands of third-party developers: it offers an alternative to the limited "countable" interface for requesting access to resources (e.g. nvidia.com/gpu: 2), providing an API more akin to that of persistent volumes. Under the hood, it uses the Container Device Interface (CDI) to do its device injection. This feature is gated behind the DynamicResourceAllocation feature gate.
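As a sketch of the alpha API surface (the resource class and claim names below are hypothetical, and the exact v1alpha1 field layout may differ slightly from this outline), a workload references a ResourceClaim instead of asking for a countable resource:

```yaml
apiVersion: resource.k8s.io/v1alpha1
kind: ResourceClaim
metadata:
  name: gpu-claim
spec:
  resourceClassName: example-gpu     # hypothetical class served by a third-party driver
---
apiVersion: v1
kind: Pod
metadata:
  name: dra-demo
spec:
  resourceClaims:
  - name: gpu
    source:
      resourceClaimName: gpu-claim   # reference the claim defined above
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resources:
      claims:
      - name: gpu                    # container consumes the claimed resource
```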
CEL in Admission Control graduates to alpha
This feature introduces a v1alpha1 API for validating admission policies, enabling extensible admission control via Common Expression Language (CEL) expressions. Currently, custom policies are enforced via admission webhooks, which, while flexible, have a few drawbacks when compared to in-process policy enforcement. To use it, enable the ValidatingAdmissionPolicy feature gate and the admissionregistration.k8s.io/v1alpha1 API via --runtime-config.
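As a minimal sketch (the policy name and the replica limit are made up for illustration), a ValidatingAdmissionPolicy declares a CEL expression, and a binding selects where it applies:

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: replica-limit.example.com            # hypothetical policy name
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  validations:
  - expression: "object.spec.replicas <= 5"  # CEL expression evaluated in-process
---
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: replica-limit-binding.example.com
spec:
  policyName: replica-limit.example.com
  matchResources:
    namespaceSelector:
      matchLabels:
        environment: test                    # only enforce in matching namespaces
```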
Pod scheduling improvements
Kubernetes v1.26 introduces several enhancements that give you better control over scheduling behavior.
PodSchedulingReadiness graduates to alpha
This feature introduces a .spec.schedulingGates field to the Pod API, to indicate whether the Pod is allowed to be scheduled or not. External users/controllers can use this field to hold a Pod from scheduling based on their policies and needs.
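A minimal sketch of a gated Pod (the gate name is a placeholder): the Pod stays unscheduled until a controller removes the entry from schedulingGates.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gated-pod
spec:
  schedulingGates:
  - name: example.com/quota-check   # placeholder gate a controller would remove
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```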
NodeInclusionPolicyInPodTopologySpread graduates to beta
By specifying a nodeInclusionPolicy in topologySpreadConstraints, you can control whether to take taints/tolerations into consideration when calculating Pod Topology Spread skew.
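In practice this surfaces as the nodeAffinityPolicy and nodeTaintsPolicy fields on a topology spread constraint; a minimal sketch (the labels and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spread-demo
  labels:
    app: demo
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
    nodeAffinityPolicy: Honor   # respect node affinity/selector when computing skew
    nodeTaintsPolicy: Honor     # respect taints the pod does not tolerate
    labelSelector:
      matchLabels:
        app: demo
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```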
Other Updates
Graduations to stable
This release includes a total of eleven enhancements promoted to Stable:
- Support for Windows privileged containers
- vSphere in-tree to CSI driver migration
- Allow Kubernetes to supply pod's fsgroup to CSI driver on mount
- Azure file in-tree to CSI driver migration
- Job tracking without lingering Pods
- Service Internal Traffic Policy
- Kubelet Credential Provider
- Support of mixed protocols in Services with type=LoadBalancer
- Reserve Service IP Ranges For Dynamic and Static IP Allocation
- CPUManager
- DeviceManager
Deprecations and removals
12 features were deprecated or removed from Kubernetes with this release.
- CRI v1alpha2 API is removed
- Removal of the v1beta1 flow control API group
- Removal of the v2beta2 HorizontalPodAutoscaler API
- GlusterFS plugin removed from available in-tree drivers
- Removal of legacy command line arguments relating to logging
- Removal of kube-proxy userspace modes
- Removal of in-tree credential management code
- The in-tree OpenStack cloud provider is removed
- Removal of dynamic kubelet configuration
- Deprecation of non-inclusive kubectl flag
- Deprecations for kube-apiserver command line arguments
- Deprecations for kubectl run command line arguments
Release notes
The complete details of the Kubernetes v1.26 release are available in our release notes.
Availability
Kubernetes v1.26 is available for download on the Kubernetes site. To get started with Kubernetes, check out these interactive tutorials or run local Kubernetes clusters using containers as "nodes", with kind. You can also easily install v1.26 using kubeadm.
Release team
Kubernetes is only possible with the support, commitment, and hard work of its community. Each release team is made up of dedicated community volunteers who work together to build the many pieces that make up the Kubernetes releases you rely on. This requires the specialized skills of people from all corners of our community, from the code itself to its documentation and project management.
We would like to thank the entire release team for the hours spent hard at work to ensure we deliver a solid Kubernetes v1.26 release for our community.
A very special thanks goes to our Release Lead, Leonard Pahlke, for successfully steering the release team throughout the entire release cycle, and for making sure that we could all contribute in the best way possible through his constant support and attention to the many and diverse details that make up the path to a successful release.
User highlights
- Wortell found that daily infrastructure management was demanding increasing amounts of developer expertise and time. They used Dapr to reduce the complexity and amount of required infrastructure-related code, allowing them to focus more time on new features.
- Utmost handles sensitive personal data and needed SOC 2 Type II attestation, ISO 27001 certification, and zero trust networking. Using Cilium, they created automated pipelines that allowed developers to create new policies, supporting over 4,000 flows per second.
- Global cybersecurity company Ericom's solutions depend on hyper-low latency and data security. With Ridge's managed Kubernetes service they were able to deploy, through a single API, to a network of service providers worldwide.
- Lunar, a Scandinavian online bank, wanted to implement quarterly production cluster failover testing to prepare for disaster recovery, and needed a better way to manage their platform services. They started by centralizing their log management system, and followed up with the centralization of all platform services, using Linkerd to connect the clusters.
- Datadog runs tens of clusters with 10,000+ nodes and 100,000+ pods across multiple cloud providers. They turned to Cilium as their CNI and kube-proxy replacement to take advantage of the power of eBPF and provide a consistent networking experience for their users across any cloud.
- Insiel wanted to modernize their software production methods and introduce a cloud native paradigm. Their digital transformation project with Kiratech and Microsoft Azure allowed them to develop a cloud-first culture.
Ecosystem updates
- KubeCon + CloudNativeCon Europe 2023 will take place in Amsterdam, The Netherlands, from 17 to 21 April 2023! You can find more information about the conference and registration on the event site.
- CloudNativeSecurityCon North America, a two-day event designed to foster collaboration, discussion and knowledge sharing of cloud native security projects and how to best use these to address security challenges and opportunities, will take place in Seattle, Washington (USA), from 1-2 February 2023. See the event page for more information.
- The CNCF announced the 2022 Community Awards Winners: the Community Awards recognize CNCF community members that are going above and beyond to advance cloud native technology.
Project velocity
The CNCF K8s DevStats project aggregates a number of interesting data points related to the velocity of Kubernetes and various sub-projects. This includes everything from individual contributions to the number of companies that are contributing, and is an illustration of the depth and breadth of effort that goes into evolving this ecosystem.
In the v1.26 release cycle, which ran for 14 weeks (September 5 to December 9), we saw contributions from 976 companies and 6877 individuals.
Upcoming Release Webinar
Join members of the Kubernetes v1.26 release team on Tuesday, January 17, 2023, from 10am to 11am EST (3pm to 4pm UTC) to learn about the major features of this release, as well as deprecations and removals to help plan for upgrades. For more information and registration, visit the event page.
Get Involved
The simplest way to get involved with Kubernetes is by joining one of the many Special Interest Groups (SIGs) that align with your interests.
Have something you'd like to broadcast to the Kubernetes community? Share your voice at our weekly community meeting, and through the channels below:
- Find out more about contributing to Kubernetes at the Kubernetes Contributors website
- Follow us on Twitter @Kubernetesio for the latest updates
- Join the community discussion on Discuss
- Join the community on Slack
- Post questions (or answer questions) on Server Fault
- Share your Kubernetes story
- Read more about what's happening with Kubernetes on the blog
- Learn more about the Kubernetes Release Team