Latest changes
vCluster Platform v4.8 + vCluster v0.33 - vMetal: Bare Metal Machine Management and Certified AI Stacks
We’re thrilled to announce vCluster Platform v4.8 and vCluster v0.33. This release extends vCluster's reach in both directions — down the stack with vMetal, our new bare metal provisioning solution, and up the stack with Certified Stacks, pre-built Terraform module sequences that deploy a full environment from virtual cluster to application layer in a single command. Together, they define a single path from raw hardware to a production-ready AI infrastructure environment.

vMetal: PXE Booting + Bare Metal Provisioning (Beta)

vCluster Platform can now PXE boot bare metal servers and provision them as managed bare metal machines, or attach them to Kubernetes clusters as managed nodes. These capabilities ship under vMetal, vCluster's bare metal provisioning solution. Users within a project can request bare metal machines on demand through the platform.

Provisioning is powered by Metal3 and Ironic. Configuration requires a layer 2 domain with MAC addresses and Redfish endpoints. Machines are provisioned at project scope, with SSH keys and network definitions managed through the platform. Each datacenter can run its own bare metal provider for geographic distribution. To learn more, see the docs page.

Certified Stacks

Certified Stacks enable one-command deployment of a complete environment: virtual cluster, tenancy model, isolation boundaries, resource policies, and application layer. Each stack is a sequence of Terraform modules that accept user input, orchestrate vCluster creation, and install the full application stack end to end. This release ships certified stacks for the following:

- Run.ai - Hard and soft multi-tenancy, with optional Run.ai control plane deployment for either model
- Slinky - Hard and soft multi-tenancy
- Ray - Hard multi-tenancy
- SkyPilot - Hard multi-tenancy

All stacks are published in a public GitHub repo.
Each stack is fully self-contained: fork the repo, modify the Terraform modules to fit your environment, swap in your own tooling, or use them as a reference to build a stack from scratch. Certified stacks are tested and maintained by vCluster.

Manage vCluster Standalone in vCluster Platform

vCluster Standalone instances can now be fully managed from vCluster Platform. Config changes and upgrades no longer require SSH access or manual intervention. New vcluster platform add standalone subcommands allow connecting standalone clusters as a host cluster, a managed cluster, or both. See the full documentation here.

Least Privilege Mode

The platform agent's ClusterRole can now be scoped to only the permissions your deployment requires. Enable least privilege mode under agentValues.leastPrivilegeMode in your platform config, then disable the features you don't use. All features are enabled by default - you only need to specify what you're turning off. The example below shows all the permissions you may disable for the platform agent:

```yaml
agentValues:
  leastPrivilegeMode:
    enabled: true
    clusterAccess:
      enabled: false
    projectQuotas:
      enabled: false
    secrets:
      enabled: false
    sleepMode:
      enabled: false
```

Note: Least privilege mode applies to agents deployed on external host clusters, not the agent bundled with the platform itself. If you want to limit agent permissions on a host cluster, deploy the platform on a separate cluster - don't register the platform's own cluster as a host cluster in any project. Learn more at the docs page.

More in This Release

- Run.ai Conformance - vCluster is now Run.ai conformant.
- DRA Sync - Dynamic Resource Allocation for GPUs now works in shared tenancy models, enabling fine-grained GPU allocation across virtual clusters. See the docs for more details.
- All Projects View - Global admins can see all vClusters and namespaces across every project in a single view.
Breaking Changes

In v0.25 we deprecated the k3s distro option, and as of v0.33 it has been removed entirely. See the docs for more details on migration options.

When using extraAccessRules, if a user or team included in a rule does not exist, it is now removed immediately; users and teams must therefore exist within the platform before rules are created.

Other Announcements & Changes

- vCluster’s Istio integration now supports Istio v1.29.

For a list of additional fixes and smaller changes, please refer to the vCluster release notes and the vCluster Platform release notes. For detailed documentation and migration guides, visit vcluster.com/docs.
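For clusters still on the removed k3s distro, migration means switching to one of the remaining distros, such as the default k8s distro. A minimal vcluster.yaml sketch of the target state (an illustration only; the exact migration steps are in the docs):

```yaml
# vcluster.yaml - sketch: the removed distro.k3s section is replaced
# by one of the remaining distros, e.g. the default k8s distro
controlPlane:
  distro:
    k8s:
      enabled: true
```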
Platform v4.7 & vCluster v0.32 - Resilience
We’re excited to announce vCluster Platform v4.7 and vCluster v0.32, a release focused on resilience, troubleshooting, and day-2 operations for production virtual clusters. From the new vCluster Debug Shell to improved observability guidance and a dedicated Virtual Cluster Status Page, these updates make it easier to diagnose issues, monitor health, and manage the full lifecycle of virtual clusters with confidence.

Debug Shell & More

The embedded etcd backing store is one of the cornerstones of production-ready virtual clusters. vCluster handles the full lifecycle of etcd as part of its control plane, and in the last release we added support for this feature to our free plan, for the whole community to benefit.

To help you troubleshoot and identify misconfigurations early, we’re introducing the vCluster Debug Shell. Attach an ephemeral debug container with helpful commands pre-wired, right from the vCluster Platform UI or vCluster CLI, and troubleshoot without leaving your browser or looking up which exact certificates etcdctl needs for the 10th time. Alternatively, run this from the CLI:

```shell
vcluster debug shell $VCLUSTER_NAME
```

Try it today by upgrading to vCluster Platform 4.7. Read more about it in the docs.

Virtual Cluster Status Page

Understanding the full lifecycle of any Kubernetes cluster can sometimes be challenging. We’re excited to help you understand the phases a virtual Kubernetes cluster goes through with our dedicated status page in the vCluster Platform UI. See which phase your cluster is in at any point in time, together with all the pods that make up the control plane and conditions surfacing relevant and actionable information. The new status page will serve as the home page for your virtual clusters and we have big plans for it - stay tuned!
Breaking Changes

Migration of external platform configuration in vcluster.yaml

As functionality previously only available from the platform becomes widely available to virtual clusters launched and managed externally, we are updating the configuration to reflect that. This release restructures how platform-specific configuration is organized, relocating most items previously under the external.platform section to the top level:

- external.platform → platform
- external.platform.autoSnapshots → snapshots
- external.platform.autoDelete → deletion

Additionally, the top-level sleepMode configuration and external.platform.autoSleep have been merged into the unified sleep field. All sleep-related settings, including auto sleep, auto wakeup, and timezone, are now consolidated under sleep. When both were previously configured, sleepMode took precedence, and the migration preserves this behavior.

- sleepMode.autoSleep & external.platform.autoSleep → sleep.auto
- sleepMode.autoWakeup → sleep.auto.wakeup
- sleepMode.timeZone & external.platform.autoSleep.timezone → sleep.auto.timezone

A migration helper is available inside the platform. However, please note that as part of this change, the previous migration logic built into the platform, which assisted users moving from v0.24 to later versions, has now been removed.

Ingress-Nginx Deprecation Update

As previously announced, the upstream ingress-nginx project has published plans for its retirement. While most of the functionality remains in the platform, it is marked as deprecated and will be removed in a future release. In this release, automatic ingress authentication has been removed, and we have stopped adding the following annotations by default to the optional platform ingress:

```yaml
nginx.ingress.kubernetes.io/proxy-read-timeout: "43200"
nginx.ingress.kubernetes.io/proxy-send-timeout: "43200"
nginx.ingress.kubernetes.io/proxy-body-size: "0"
```

Other Announcements & Changes

We’ve extended our support for the external database backing store.
Postgres and MySQL are now supported across all major cloud offerings and with a wider range of versions. Check out the full compatibility matrix for more details.

We have added support for Kubernetes v1.35, enabling users to take advantage of the latest enhancements, security updates, and performance improvements in the upstream Kubernetes releases.

We’re extending our existing monitoring docs to include guidance across all tenancy models. Virtual clusters with private nodes and standalone deployments require different approaches and are closer to traditional Kubernetes monitoring, while maintaining the benefits and unique flexibility of vCluster. We will keep working on the guides over the next couple of weeks - stay tuned!

For a list of additional fixes and smaller changes, please refer to the vCluster release notes and the vCluster Platform release notes. For detailed documentation and migration guides, visit vcluster.com/docs.
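To make the configuration relocation in this release concrete, here is a hypothetical before/after vcluster.yaml fragment. The field bodies are placeholders; only the key mapping is taken from the breaking-changes list above:

```yaml
# Before (pre-v0.32)
external:
  platform:
    autoSnapshots: {}      # moves to top-level snapshots
    autoDelete: {}         # moves to top-level deletion
    autoSleep:
      timezone: UTC        # moves to sleep.auto.timezone
sleepMode:
  autoSleep: {}            # moves to sleep.auto
  autoWakeup: {}           # moves to sleep.auto.wakeup

# After (v0.32)
snapshots: {}
deletion: {}
sleep:
  auto:
    wakeup: {}
    timezone: UTC
```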
Platform v4.6 & vCluster v0.31 - Introducing the Free Tier
vCluster Platform is a powerful centralized control plane which lets teams securely run virtual clusters and workloads at scale. Previously, installing the Platform automatically started a two-week trial on the Enterprise Ultimate tier, limiting usage to a fixed window. This latest set of releases introduces a new Free Tier which allows users to continue using the Platform for an unlimited amount of time. After installing the platform you will see an activation window as shown below:

Previously licensed versions of the Platform or virtual clusters are unaffected; only new installations will require an activation. Please see our blog post on the topic, and the pricing page for more detailed information on what is included in each tier.

vCluster in Docker (vind)

We are excited to introduce a new engine for running Kubernetes locally. Starting in v0.31, the vCluster CLI can spin up lightweight clusters in seconds, simplifying both your development workflow and the underlying stack. Compared with other engines it has several advantages, including the ability to start or stop clusters, and auto-proxying images from the local Docker daemon. To create a cluster:

```shell
vcluster upgrade --version v0.31.0
vcluster use driver docker
vcluster create dev
```

Now you have a running cluster, which can use all the usual commands from the CLI, such as:

- vcluster list
- vcluster connect / vcluster disconnect
- vcluster delete
- vcluster sleep / vcluster wakeup

This project is under active development and will get additional enhancements in the near future, including closer integration with other vCluster features. Let us know what you think!

Experimental Resource Proxy

The new experimental resource proxy feature lets a virtual cluster transparently proxy custom resource requests to another “target” virtual cluster, so users keep working as if the CRDs were local while the actual storage and controllers live elsewhere.
This is ideal for centralized resource management and cross-cluster workflows, and it stays safe by default: each client virtual cluster only sees the resources it created, with an optional access mode to expose everything when you need it. Dive into the examples in the docs.

Security Advisories

Along with this release we have published two advisories, the first of which is considered a CVE. Please review the following links for more information and mitigation steps.

- CVE-2026-22806: Access Keys Allow Access Beyond Scope. This update has the potential to disrupt existing access keys; please see the advisory for more information.
- Advisory: Do not allow non-privileged users to create virtual clusters without a template.

Other Announcements & Changes

The upstream ingress-nginx project has published plans for its upcoming retirement and will cease development in March 2026. As of v0.31.0 we are deprecating any ingress-nginx specific use cases or features, such as our chart annotations, or the ability to deploy it via the CLI or platform. Additionally, solutions and examples that feature the ingress-nginx controller project in our docs have been deprecated and will be removed in the future. We will continue to support the use of ingress controllers, and will further address this area in the future.

- Helm v4 is now supported to deploy our vCluster or Platform charts.
- Istio 1.28 is now supported.
- The vCluster helm chart now includes an optional PodDisruptionBudget.

vCluster Breaking Changes

In v0.26 we introduced a feature to auto-repair embedded etcd in specific situations. After further review and testing we have removed this feature, as in certain cases it can cause instability. This removal has been backported to versions 0.30.2, 0.29.2, 0.28.1, 0.27.2, and 0.26.4.

Network policies have been significantly revised for enhanced security and flexibility, moving beyond the previous, egress-only approach.
Additionally, the configuration now mirrors the standard Kubernetes NetworkPolicy specification. Enabling policies.networkPolicy.enabled now activates network policies and automatically deploys the necessary default ingress and egress policies for vCluster. Unlike the previous behavior, external control plane ingress/egress traffic is now denied by default. The configuration includes distinct sections for granular control to support all tenancy models. Please refer to the docs for details.

For a list of additional fixes and smaller changes, please refer to the release notes:

- https://github.com/loft-sh/vcluster/releases/tag/v0.31.0
- https://github.com/loft-sh/loft/releases/tag/v4.6.0

For detailed documentation and migration guides, visit vcluster.com/docs and vcluster.com/docs/platform.
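As a sketch, enabling the revised network policies is a single switch in vcluster.yaml. Only the enabled key is taken from this release's notes; the finer-grained ingress/egress sections should be looked up in the docs:

```yaml
# vcluster.yaml
policies:
  networkPolicy:
    # Deploys the default ingress and egress policies for the vCluster;
    # external control plane traffic is denied by default
    enabled: true
```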
Platform v4.5 & vCluster v0.30 - Secure Cloud Bursting, On-Prem Networking, and Persistent Volume Snapshots
We’re excited to roll out vCluster Platform v4.5 and vCluster v0.30, two big releases packed with features that push Kubernetes tenancy even further. From hybrid flexibility to stronger isolation and smarter automation, these updates are another step toward delivering the most powerful and production-ready tenancy platform in the Kubernetes ecosystem.

Platform v4.5 - vCluster VPN, Netris integration, UI Kubectl Shell, and more

vCluster VPN (Virtual Private Network)

We’ve just wrapped up the most significant shift in how virtual clusters can operate with our Future of Kubernetes Tenancy launch series, introducing two completely new ways of isolating tenants with vCluster Private Nodes and vCluster Standalone. With Private Nodes, the control plane is hosted on a shared Kubernetes cluster while worker nodes can be joined directly into a virtual cluster. vCluster Standalone takes this further and allows you to run the control plane on dedicated nodes, solving the “Cluster One” problem.

A networking requirement for both Private Nodes and Standalone is to expose the control plane somehow, typically via LoadBalancers or Ingresses, to allow nodes to register themselves. This is easy to do if the control plane and nodes are all on the same physical network, but gets considerably harder if they aren’t. vCluster VPN creates a secure and private connection between the virtual cluster control plane and Private Nodes using networking technology developed by Tailscale. This eliminates the need to expose the virtual cluster control plane directly. Instead, you can create an overlay network for control plane ↔ node and node ↔ node communication. This makes vCluster VPN perfectly suited for scenarios where you intend to join nodes from different sources.

A common challenge of on-prem Kubernetes clusters is providing burst capacity. Auto Nodes and vCluster VPN enable you to automatically provision additional cloud-backed nodes when demand exceeds local capacity.
The networking between all nodes in the virtual cluster, regardless of their location, is taken care of by vCluster VPN. Let’s walk through setting up a burst-to-cloud virtual cluster. First, create NodeProviders for your on-prem infrastructure, for example OpenStack, and a cloud provider like AWS. Next, create a virtual cluster with two node pools and vCluster VPN:

```yaml
# vcluster.yaml
privateNodes:
  enabled: true
  # Expose the control plane privately to nodes using vCluster VPN
  vpn:
    enabled: true
    # Create an overlay network over all nodes in addition to direct control plane communication
    nodeToNode:
      enabled: true
  autoNodes:
    - provider: openstack
      static:
        # Ensure we always have at least 10 large on-prem nodes in our cluster
        - name: on-prem-nodepool
          quantity: 10
          nodeTypeSelector:
            - property: instance-type
              value: "lg"
    - provider: aws
      # Dynamically join ec2 instances when workloads exceed our on-prem capacity
      dynamic:
        - name: cloud-nodepool
          nodeTypeSelector:
            - property: instance-type
              value: "t3.xlarge"
          limits:
            nodes: 20 # Enforce a maximum of 20 nodes in this NodePool
```

Auto Nodes Improvements

In addition to vCluster VPN, this release brings many convenience features and improvements to Auto Nodes. We're upgrading our Terraform Quickstart NodeProviders for AWS, Azure, and GCP to behave more like traditional cloud Kubernetes clusters by deploying native cloud controller managers and CSI drivers by default. To achieve this, we're introducing optional NodeEnvironments for the Terraform NodeProvider. NodeEnvironments are created once per provider per virtual cluster. They enable you to provision cluster-wide resources, like VPCs, security groups, and firewalls, as well as control plane specific deployments inside the virtual cluster, such as cloud controllers or CSI drivers.
Emphasizing the importance of NodeEnvironments, we've updated the vcluster.yaml in v0.30 to allow easy central configuration of environments:

```yaml
# vcluster.yaml
privateNodes:
  enabled: true
  autoNodes:
    # Configure the relevant NodeProvider's environment and NodePools directly
    - provider: aws
      properties:
        # Global properties, available in both NodeEnvironments and NodeClaims
        region: us-east-1
      dynamic:
        - name: cpu-pool
          nodeTypeSelector:
            key: instance-type
            operator: "In"
            values: ["t3.large", "t3.xlarge"]
```

IMPORTANT: You need to update the vcluster.yaml when migrating your virtual cluster from v0.29 to v0.30. Please take a look at the docs for the full specification.

UI Kubectl Shell

Platform administrators and users alike often find themselves in a situation where they just need to execute a couple of kubectl commands against a cluster to troubleshoot or get a specific piece of information from it. We’re now making it really easy to do just that within the vCluster Platform UI. Instead of generating a new kubeconfig, downloading it, plugging it into kubectl, and cleaning up afterwards just to run kubectl get nodes, you can now connect to your virtual cluster right in your browser. The Kubectl Shell will create a new pod in your virtual cluster with a specifically crafted kubeconfig already mounted and ready to go. The shell comes preinstalled with common utilities like kubectl, helm, jq/yq, curl, and nslookup. Quickly run a couple of commands against your virtual cluster and rest assured that the pod will be cleaned up automatically after 15 minutes of inactivity. Check out the docs for more information.

Netris Partnership - Cloud-style Network Automation for Private Datacenters

We’re excited to announce our strategic partnership with Netris, the company bringing cloud-style networking to private environments and on-prem datacenters. vCluster now integrates deeply with Netris and is able to provide hard physical tenant isolation on the data plane.
Isolating networks is a crucial aspect of clustering GPUs, as a lot of the value of GenAI is in the model parameters. The combination of vCluster and Netris allows you to keep data private while still giving you the maximum amount of flexibility and maintainability, helping you dynamically distribute access to GPUs across your external and internal tenants. Get started by reusing your existing tenant-a-net Netris Server Cluster and automatically joining nodes into it by setting up a virtual cluster with this vcluster.yaml:

```yaml
# vcluster.yaml
integrations:
  # Enable Netris integration and authenticate
  netris:
    enabled: true
    connector: netris-credentials
privateNodes:
  enabled: true
  autoNodes:
    # Automatically join nodes with GPUs to the Netris Server Cluster for tenant A
    - provider: bcm
      properties:
        netris.vcluster.com/server-cluster: tenant-a-net
      dynamic:
        - name: gpu-pool
          nodeTypeSelector:
            - key: "bcm.vcluster.com/gpu-type"
              value: "h100"
```

Keep an eye out for future releases as we’re expanding our partnership with Netris. The next step is to integrate vCluster even deeper and allow you to manage all network configuration right in your vcluster.yaml. Read more about the integration in the docs and our partnership announcement.

Other Announcements & Changes

AWS RDS Database connectors are now able to provision and authenticate using workload identity (IRSA and Pod Identity) directly through the vCluster Platform. Learn more in the docs.

Breaking Changes

As mentioned above, you need to take action when you upgrade Auto Node backed virtual clusters from v0.29 to v0.30. Please consult the documentation.

For a list of additional fixes and smaller changes, please refer to the release notes. For detailed documentation and migration guides, visit vcluster.com/docs/platform.
vCluster v0.30 - Volume Snapshots & K8s 1.34 Support

Persistent Volume Snapshots

In vCluster v0.25, we introduced the Snapshot & Restore feature, which allowed taking a backup of etcd via the vCluster CLI and exporting it to locations like S3 or OCI registries. Then in Platform v4.4 and vCluster v0.28 we expanded substantially on that by adding support inside the Platform. This lets users set up an automated system which schedules snapshots on a regular basis, with configuration to manage where and how long they are stored.

Now we are introducing another feature: Persistent Volume Snapshots. By integrating the upstream Kubernetes Volume Snapshot feature, vCluster can include snapshots of persistent volumes, which will be created and stored automatically by the relevant CSI driver. When used with the auto-snapshots feature, you will have a recurring stable backup that can manage disaster recovery at whatever pace you need, including your workload’s persistent data.

To use the new feature, first you’ll need to install an upstream CSI driver and configure a default VolumeSnapshotClass. Then use the --include-volumes flag when running with the CLI:

```shell
vcluster snapshot create my-vcluster "s3://my-s3-bucket/snap-1.tar.gz" --include-volumes
```

Or, if using auto-snapshots, you can set the volumes.enabled config to true:

```yaml
external:
  platform:
    autoSnapshot:
      enabled: true
      schedule: "0 * * * *"
      volumes:
        enabled: true
```

Now when a snapshot is completed, it will have backed up any volumes that are compatible with the CSI drivers installed in the cluster. These volumes will be stored by the CSI driver in the location of its storage, which is separate from the primary location of your snapshot. For example, when using AWS’s EBS CSI driver, volumes are backed up inside EBS storage, even though the primary snapshot may be in an OCI registry.
To restore, simply add the --restore-volumes flag, and the volumes will be re-populated inside the new virtual cluster:

```shell
vcluster restore my-vcluster "s3://my-s3-bucket/snap-1.tar.gz" --restore-volumes
```

Note that this feature is currently in Beta, is not recommended for use in mission-critical environments, and has limitations. We plan to expand this feature over time, so stay tuned for further features and enhancements which will make it usable across an even wider variety of deployment models and infrastructure.

Other Announcements & Changes

We have added support for Kubernetes 1.34, enabling users to take advantage of the latest enhancements, security updates, and performance improvements in the upstream Kubernetes release.

As Kubernetes slowly transitions from Endpoints to EndpointSlices, vCluster now supports both. Syncing EndpointSlices to a host cluster can now be done in the same manner as Endpoints.

For a list of additional fixes and smaller changes, please refer to the release notes. For detailed documentation and migration guides, visit vcluster.com/docs.
vCluster v0.29 - vCluster Standalone
The third and final release in our Future of Kubernetes Tenancy launch series has arrived, but first let’s recap the previous two installments. Private Nodes and Auto Nodes established a new tenancy model that entirely shifts how vCluster can be deployed and managed:

- Private Nodes allows users to join external Kubernetes nodes directly into a virtual cluster, ensuring workloads run in a fully isolated environment with separate networking, storage, and compute. This makes the virtual cluster behave like a true single-tenant cluster, removing the need to sync objects between host and virtual clusters.
- Auto Nodes, built on the open-source project Karpenter, brings on-demand node provisioning to any Kubernetes environment - whether that is in a public cloud, on-premises, bare metal, or multi-cloud. It includes support for Terraform/OpenTofu, KubeVirt, and NVIDIA BCM as Node Providers, making it incredibly flexible. Auto Nodes will rightsize your cluster for you, easing the maintenance burden and lowering costs.

Today’s release yet again expands how vCluster can operate. Up until now, vCluster has always needed a host cluster to be deployed into. Even with private nodes, the control-plane pod lives inside a shared cluster. For many people and organizations this left a “Cluster One” problem: where and how would they host this original cluster?

vCluster Standalone provides the solution to this problem. Standalone runs directly on a bare metal or VM node, spinning up the control plane and its components as processes running directly on the host using binaries, rather than launching the control plane as a pod on an existing cluster. This gives you the freedom and flexibility to install in any environment, without additional vendors. After an initial control plane node is up, you can easily make the control plane highly available by joining additional control plane nodes to the cluster and enabling embedded etcd across them.
Joining control plane nodes works the same way as joining worker nodes with the private nodes approach. Besides manually adding worker nodes to a Standalone cluster, you can also use Auto Nodes to automatically provision worker nodes on demand.

Let’s look at an example. SSH into the machine you want to use for the first control plane node and create a vcluster.yaml file as shown below. In this case we’ll also enable joinNode to use this host as a worker node in the cluster in addition to being a control plane node:

```yaml
# vcluster.yaml
controlPlane:
  # Enable standalone
  standalone:
    enabled: true
    # Optional: the control plane node will also be considered a worker node
    joinNode:
      enabled: true
# Required for adding additional worker nodes
privateNodes:
  enabled: true
```

Now, let’s run this command to bootstrap our vCluster on this node:

```shell
sudo su -
export VCLUSTER_VERSION="v0.29.0"
curl -sfL https://github.com/loft-sh/vcluster/releases/download/${VCLUSTER_VERSION}/install-standalone.sh | sh -s -- --vcluster-name standalone --config ${PWD}/vcluster.yaml
```

A kubeconfig will be automatically configured, so we can directly use our new cluster and check out its nodes:

```shell
$ kubectl get nodes
NAME               STATUS   ROLES                  AGE   VERSION
ip-192-168-1-101   Ready    control-plane,master   10m   v1.32.1
```

That’s it, you now have a new Kubernetes cluster! From here you can join additional worker nodes, or additional control-plane nodes to make the cluster highly available. To add additional nodes to this cluster, SSH into them and run the node join command.
You can generate a node join command by ensuring your current kube-context is using your newly created cluster and then running one of these commands:

```shell
# Generate join command for control plane nodes (for HA)
vcluster token create --control-plane

# Generate join command for worker nodes
vcluster token create
```

To manage or update the vCluster, simply adjust /var/lib/vcluster/config.yaml and then restart the vCluster service:

```shell
systemctl restart vcluster.service
```

vCluster Standalone streamlines your infrastructure by removing the need for external providers or distros. It delivers a Kubernetes cluster that is as close to upstream vanilla Kubernetes as possible, while adding convenience features such as etcd self-healing and others available to any vCluster. Launching a vCluster Standalone cluster allows you to bootstrap the initial cluster required to host your virtual clusters and vCluster Platform, but it can also be used to run end-user workloads directly. See the documentation for further details, full configuration options, and specs.

Notable Improvements

Setting new fields via patches

Until now, virtual clusters only supported patching existing fields. With v0.29 we have added functionality to set new fields as well, including conditionally based on an expression. This broadens how pods can interact with existing mechanisms inside the host cluster. For more information see the docs about adding new keys.

Other Announcements & Changes

Etcd has been upgraded to v3.6. The upstream etcd docs state that before upgrading to v3.6, you must already be on v3.5.20 or higher, otherwise failures may occur. That version of etcd was introduced in vCluster v0.24.2, so please ensure that you have upgraded to at least that version before upgrading to v0.29.

The External Secrets Operator integration configuration was previously reformatted, and the original config options have now been removed. You must convert to the new format before upgrading to v0.29.
For more information please see the docs page. For a list of additional fixes and smaller changes, please refer to the release notes.

This release completes our Future of Kubernetes Tenancy launch series. We hope you’ve enjoyed following along this summer and experimenting with all the new features we’ve launched. As we head toward fall, we hope to hear more from you, and we look forward to speaking with you directly at the upcoming KubeCon + CloudNativeCon conferences in Atlanta and Amsterdam!
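As a sketch of the new-field patching introduced in v0.29: the annotation key and value here are hypothetical, and the path/expression patch format is assumed from the vCluster sync patches docs:

```yaml
# vcluster.yaml - set a field that does not yet exist on synced pods
sync:
  toHost:
    pods:
      patches:
        - path: metadata.annotations["example.com/team"]  # hypothetical new key
          expression: '"tenant-a"'                        # value set on the host object
```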
Platform v4.4 and vCluster 0.28 - Auto Nodes, Auto Snapshots and More
vCluster Platform v4.4 - Auto Nodes

In our last release, we announced Private Nodes for virtual clusters, marking the most significant shift in how vCluster operates since its inception in 2021. With Private Nodes, the control plane is hosted on a shared Kubernetes cluster, while worker nodes can be joined directly to a virtual cluster, completely isolated from other tenants and without being part of the host cluster. These nodes exist solely within the virtual cluster, enabling stronger separation and more flexible architecture.

Now, we’re taking Private Nodes to the next level with Auto Nodes. Powered by the popular open-source project Karpenter and baked directly into the vCluster binary, Auto Nodes makes it easier than ever to dynamically provision nodes on demand. We’ve simplified configuration, removed cloud-specific limitations, and enabled support for any environment—whether you’re running in public clouds, on-premises, bare metal, or across multiple clouds.

Auto Nodes brings the power of Karpenter to everyone, everywhere. It’s based on the same open-source engine that powers EKS Auto Mode, but without the EKS-only limitations—bringing dynamic, on-demand provisioning to any Kubernetes environment, across any cloud or on-prem setup. Auto Nodes is easy to get started with, yet highly customizable for advanced use cases. If you’ve wanted to use Karpenter outside of AWS or simplify your setup without giving up power, Auto Nodes delivers.

Auto Nodes is live today—here’s what you can already do:

- Set up a Node Provider in vCluster Platform: we support Terraform/OpenTofu, KubeVirt, and NVIDIA BCM.
- Define Node Types (use the Terraform/OpenTofu Node Provider for any infra that has a Terraform provider, including public clouds but also private cloud environments such as OpenStack, MAAS, etc.) or use our Terraform-based quickstart templates for AWS, GCP, and Azure.
- Add an autoNodes config section to a vcluster.yaml (example snippet below).
- See Karpenter in action as it dynamically creates NodeClaims, which the vCluster Platform fulfills using the available Node Types to achieve perfectly right-sized clusters.

Bonus: Auto Nodes will also handle environment details beyond the nodes themselves, including setting up LoadBalancers and VPCs as needed. More details in our sample repos and the docs.

```yaml
# vcluster.yaml with Auto Nodes configured
privateNodes:
  enabled: true
autoNodes:
  dynamic:
    - name: gcp-nodes
      provider: gcp
      requirements:
        - property: instance-type
          operator: In
          values: ["e2-standard-4", "e2-standard-8"]
    - name: aws-nodes
      provider: aws
      requirements:
        - property: instance-type
          operator: In
          values: ["t3.medium", "t3.large"]
    - name: private-cloud-openstack-nodes
      provider: openstack
      requirements:
        - property: os
          value: ubuntu
```

Demo

To see Auto Nodes in action with GCP nodes, watch the video below for a quick demo.

Caption: Auto scaling virtual cluster on GCP based on workload demand

Get Started

To harness the full power of Karpenter and Terraform in your environment, we recommend forking our starter repositories and adjusting them to meet your requirements. Public clouds are a great choice for maximum flexibility and scale, though sometimes they aren't quite the perfect fit, especially when it comes to data sovereignty, low-latency local compute, or bare metal use cases. We've developed additional integrations to bring the power of Auto Nodes to your on-prem and bare metal environments:

- NVIDIA BCM is the management layer for DGX SuperPOD, NVIDIA's flagship supercomputer offering: a full-stack data center platform that includes industry-leading computing, storage, networking, and software for building on-prem AI factories.
- KubeVirt enables you to break up large bare metal servers into smaller isolated VMs on demand.

Coming Soon: vCluster Standalone

Auto Nodes marks another milestone in our tenancy models journey, but we're not done yet. Stay tuned for the next level of tenant isolation: vCluster Standalone.
This deployment model allows you to run vCluster as a binary on a control plane node, or on a set of multiple control plane nodes for HA, similar to how traditional distros such as RKE2 or OpenShift are deployed. vCluster is designed to run control planes in containers, but many of our customers face the Day 0 challenge of having to spin up the host cluster that the vCluster control plane pods run on top of. With vCluster Standalone, we now answer our customers' demand to help them spin up and manage this host cluster with the same tooling and support they receive for their containerized vCluster deployments. Stay tuned for this exciting announcement planned for October 1.

Other Changes

Notable Improvements

Auto Snapshots

Protect your virtual clusters with automated backups via vCluster Snapshots. Platform-managed snapshots run on your defined schedule, storing virtual cluster state in S3-compatible buckets or OCI registries. Configure custom retention policies to match your backup requirements and restore virtual clusters from snapshots quickly. Check out the documentation to get started. In upcoming releases we will unveil more capabilities around including persistent storage in these snapshots, but for now snapshots are limited to control plane state (Kubernetes resources).

Require Template Flow

Improves your tenants' self-service experience in the vCluster Platform UI by highlighting available vCluster Templates in projects. Assign icons to templates to make them instantly recognizable. Read more about projects and self-service virtual clusters in the docs.

Air-Gapped Platform UI

Air-gapped deployments come with a unique set of challenges, from ensuring your clusters are truly airtight, to pre-populating container images and securely accessing the environment. We're introducing a new UI setting to give you fine-grained control over which external URLs the vCluster Platform UI can access.
This helps you validate that what you see in the web interface matches what your pods can access. Read more about locking down the vCluster Platform UI.

vCluster v0.28

In parallel with Platform v4.4, we've also launched vCluster v0.28. This release includes support for Auto Nodes and the other updates mentioned above, plus additional bug fixes and enhancements.

Other Announcements & Changes

- The Isolated Control Plane feature, configured with experimental.isolatedControlPlane, was deprecated in v0.27 and has been removed in v0.28.
- The experimental Generic Sync feature has been removed as of v0.28.
- The External Secret Operator integration configuration has been reformatted. Custom resources have migrated to either the fromHost or toHost section, and the enabled option has been removed. This clarifies the direction of sync and matches our standard integration config pattern. The previous configuration is now deprecated and will be removed in v0.29. For more information, please see the docs page.

For a list of additional fixes and smaller changes, please refer to the Platform release notes and the vCluster release notes.

What's Next

This release marks another milestone in our tenancy models journey. Stay tuned for the next level of tenant isolation: vCluster Standalone. For detailed documentation and migration guides, visit vcluster.com/docs.
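To make the External Secret Operator change above more concrete, here is a hedged sketch of what the migration might look like. The exact field names (integrations.externalSecrets and its sync sub-sections) are assumptions based on the release notes, not the authoritative schema; check the docs page for the real configuration reference.

```yaml
# Hypothetical sketch of the External Secret Operator integration
# after the reformatting; field names are illustrative assumptions.
# Custom resources now live under fromHost or toHost, making the
# direction of sync explicit, and the per-resource enabled flag
# replaces the old top-level enabled option on the resource list.
integrations:
  externalSecrets:
    enabled: true
    sync:
      toHost:
        externalSecrets:
          enabled: true
      fromHost:
        clusterStores:
          enabled: true
```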
vCluster v0.27 - Dedicated Clusters with Private Nodes
The first installment of our Future of Kubernetes launch series is here, and we couldn't be more excited. Today, we introduce Private Nodes, a powerful new tenancy model that gives users the option to join Kubernetes nodes directly into a virtual cluster. With this feature, all your workloads run in an isolated environment, separate from other virtual clusters, which provides a higher level of security and avoids potential interference or scheduling issues.

In addition, when using private nodes, objects no longer need to be synced between the virtual and host cluster. The virtual cluster behaves much closer to a traditional cluster: completely separate nodes, separate CNI, separate CSI, and so on. A virtual cluster with private nodes is effectively an entirely separate single-tenant cluster. The main difference from a traditional cluster remains that the control plane runs inside a container rather than as a binary directly on a dedicated control plane node.

This is a giant shift in how vCluster can operate, and it opens up a host of new possibilities. Let's take a high-level look at setting up a virtual cluster with private nodes. For brevity, only partial configuration is shown here.

First, we enable privateNodes in the vcluster.yaml:

```yaml
privateNodes:
  enabled: true
```

Once created, we join our private worker nodes to the new virtual cluster. To do this, we follow two steps:

1. Connect to the virtual cluster and generate a join command:

```shell
vcluster connect my-vcluster
# Now within the vCluster context, run:
vcluster token create --expires=1h
```

2. SSH into the node you would like to join as a private node and run the command you received from the previous step. It looks similar to this (placeholders stand in for your control plane endpoint and token):

```shell
curl -sfLk https://<vcluster-endpoint>/node/join?token=<token> | sh -
```

Run the above command on any node and it will join the cluster. That's it!
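Node lifecycle management can also be expressed in vcluster.yaml alongside the privateNodes snippet above. The autoUpgrade field below is a hedged sketch of how opting into control-plane-managed node upgrades might look; the exact option name is an assumption, so consult the documentation for the real setting.

```yaml
# Hedged sketch: the autoUpgrade field name is an assumption,
# not verified against the vCluster configuration reference.
privateNodes:
  enabled: true
  # Let the virtual cluster control plane upgrade joined worker
  # nodes whenever the control plane itself is upgraded; disable
  # this to manage node versions independently.
  autoUpgrade:
    enabled: true
```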
The vCluster control plane can also manage and automatically upgrade each node when the control plane itself is upgraded, or you can choose to manage nodes independently. Please see the documentation for more information and configuration options.

Coming Soon: Auto Nodes for Private Nodes

Private Nodes give you isolation, but we won't stop there. One of our key drivers has always been to make Kubernetes architecture more efficient and less wasteful of resources. As you may notice, Private Nodes now allows you to use vCluster for creating single-tenant clusters, which can increase the risk of underutilized nodes and wasted resources. That's why we're working on an even more exciting feature that builds on top of Private Nodes: we call it Auto Nodes.

With the next release, vCluster will be able to dynamically provision private nodes for each virtual cluster using a built-in Karpenter instance. Karpenter is the open-source Kubernetes node autoscaler developed by AWS. With vCluster Auto Nodes, your virtual cluster will be able to scale on auto-pilot just like EKS Auto Mode does in AWS, but it will work anywhere, not just in AWS with EC2 nodes. Imagine auto-provisioning nodes from different cloud providers, in your private cloud, and even on bare metal.

Combine Private Nodes and Auto Nodes and you get the best of both worlds: maximum hard isolation and dynamic clusters with maximum infrastructure utilization, backed by the speed, flexibility, and automation benefits of virtual clusters, all within your own data centers, cloud accounts, and even in hybrid deployments. To make sure you're the first to know once Auto Nodes is available, subscribe to our Launch Series: The Future of Kubernetes Tenancy.

Other Changes

Notable Improvements

Our CIS hardening guide is now live. Based on the industry-standard CIS benchmark, this guide outlines best practices tailored to vCluster's unique architecture.
By following it, you can apply relevant controls that improve your overall security posture and better align with security requirements.

The vCluster CLI now has a command to review and rotate certificates, including the client, server, and CA certificates. Client certificates only remain valid for one year by default. See the docs or vcluster certs -h for more information. This command already works with private nodes but requires additional steps.

We have added support for Kubernetes 1.33, enabling users to take advantage of the latest enhancements, security updates, and performance improvements in the upstream Kubernetes release.

Other Announcements & Changes

- Breaking Change: The sync.toHost.pods.rewriteHosts.initContainer configuration has been migrated from a string to our standard image object format. This only needs to be addressed if you are currently using a custom image. Please see the docs for more information.
- The Isolated Control Plane feature, configured with experimental.isolatedControlPlane, has been deprecated as of v0.27 and will be removed in v0.28.
- In an upcoming minor release we will be upgrading etcd to v3.6. The upstream etcd docs state that before upgrading to v3.6, you must already be on v3.5.20 or higher, otherwise failures may occur. That version of etcd was introduced in vCluster v0.24.2, so please plan to upgrade to at least v0.24.2 in the near future.
- The External Secret Operator integration configuration has been reformatted. Custom resources have migrated to either the fromHost or toHost section, and the enabled option has been removed. This clarifies the direction of sync and matches our standard integration config pattern. The previous configuration is now deprecated and will be removed in v0.29. For more information, please see the docs page.
- When syncing CRDs between the virtual and host clusters, you can now specify the API Version. While not a best practice, this can provide a valuable alternative for specific use cases. See the docs for more details.

For a list of additional fixes and smaller changes, please refer to the release notes.
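To illustrate the rewriteHosts.initContainer breaking change noted above, here is a hedged before/after sketch. The image reference is purely illustrative, and the image object fields (registry, repository, tag) are assumptions based on vCluster's standard image format, so verify the exact schema against the docs.

```yaml
# Before (pre-v0.27): a plain image string
# (hypothetical custom image, for illustration only)
# sync:
#   toHost:
#     pods:
#       rewriteHosts:
#         enabled: true
#         initContainer: "library/alpine:3.20"

# After (v0.27+): the standard image object format
# (field names are illustrative assumptions)
sync:
  toHost:
    pods:
      rewriteHosts:
        enabled: true
        initContainer:
          image:
            registry: docker.io
            repository: library/alpine
            tag: "3.20"
```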