Latest changes
vCluster v0.26 - Namespace Syncing and Hybrid-Scheduling
Our newest release provides two new features that make managing and scheduling workloads inside virtual clusters even easier, along with several important updates.

Namespace Syncing

In a standard vCluster setup, when objects are synced from the virtual cluster to the host cluster, their names are translated to ensure no conflicts occur. However, some use cases have a hard requirement that names and namespaces remain static. Our new Namespace Syncing feature has been built specifically for this scenario: a namespace and any objects inside of it are retained 1:1 when synced from the virtual cluster to the host cluster, as long as they follow user-specified custom mapping rules. Let's look at an example:

```yaml
sync:
  toHost:
    namespaces:
      enabled: true
      mappings:
        byName:
          "team-*": "host-team-*"
```

With the vcluster.yaml shown above, creating a namespace called team-one-application syncs it to the host as host-team-one-application. Any objects inside this namespace that have toHost syncing enabled are then synced 1:1, without renaming. This lets users keep standard naming conventions for namespaces regardless of location, and opens up an alternative way of operating virtual clusters. Several other configurations, patterns, and restrictions exist; please see the documentation for more information. Also note that this feature replaces the experimental multi-namespace-mode feature with immediate effect.

Hybrid Scheduling

Our second headline feature focuses on scheduling. Previously, a virtual cluster could use either the host scheduler or the virtual scheduler, but not both at the same time. This was restrictive for users who wanted a combination of both, for example having a pod scheduled by a custom scheduler running on either the virtual cluster or the host, while everything else falls back to the host's default scheduler. Enter hybrid scheduling:

```yaml
sync:
  toHost:
    pods:
      hybridScheduling:
        enabled: true
        hostSchedulers:
          - kai-scheduler
```

The above config tells vCluster that the kai-scheduler exists on the host cluster, so any pod requesting it will be forwarded to the host to be scheduled there. Any scheduler not listed in the config is assumed to be running directly inside the virtual cluster; pods requesting it remain pending there until they are scheduled, and only then are synced to the host cluster. Finally, any pod that does not specify a scheduler will use the host's default scheduler. Therefore, if a new custom scheduler is needed, you only need to deploy the scheduler inside the virtual cluster and set its name on the pod spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: server
spec:
  schedulerName: my-new-custom-scheduler
```

Alternatively, if the virtual scheduler option is enabled, the virtual cluster's default scheduler will be used instead of the host's, which means all scheduling will occur inside the virtual cluster. Note that this configuration has moved from advanced.virtualScheduler to controlPlane.distro.k8s.scheduler.enabled, as shown in the sketch below. This feature and its various options give greater scheduling freedom both to users who only have access to the virtual cluster and to platform teams who would like to extend certain schedulers to their virtual clusters.
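For reference, here is a minimal vcluster.yaml sketch that enables the virtual scheduler at its new location; it simply expands the controlPlane.distro.k8s.scheduler.enabled path mentioned above.

```yaml
# Enables the virtual cluster's own default scheduler, so that all pods are
# scheduled inside the virtual cluster (new location of the old
# advanced.virtualScheduler setting).
controlPlane:
  distro:
    k8s:
      scheduler:
        enabled: true
```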
Other Changes

Notable Improvements

Class objects synced from the host can now be filtered by selectors or expressions. More notably, they are now restricted based on that filter: if a user attempts to create an object referencing a class that does not match the selectors or expressions, meaning it was not imported into their virtual cluster, syncing that object to the host will now fail. This includes ingress classes, runtime classes, priority classes, and storage classes. A configuration sketch appears at the end of this entry.

Our embedded etcd feature will now attempt to auto-recover where possible. For example, if there are three etcd nodes and one fails while the remaining two maintain quorum, the failed node will be automatically replaced.

Breaking Changes

The k0s distro has been fully removed from vCluster. See the announcement in our v0.25 changelog for more information.

As noted above, the experimental multi-namespace-mode feature has been removed.

Fixes & Other Changes

Please note that the location of the config.yaml in the vCluster pod has moved from /var/vcluster/config.yaml to /var/lib/vcluster/config.yaml.

v0.25 introduced a slimmed-down images.txt asset on the release object, with the majority of images moving into images-optional.txt. In v0.26 we have additionally moved the etcd, alpine, and k3s images into the optional list, keeping the primary list as streamlined as possible.

The background-proxy image can now be customized, as can the syncer's liveness and readiness probes.

Several key bugs have been fixed, including issues surrounding vcluster connect, creating custom service CIDRs, and using vcluster platform connect with a direct cluster endpoint. For a list of additional fixes and smaller changes, please refer to the release notes.
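As referenced under Notable Improvements, below is a rough sketch of filtering ingress classes synced from the host by label. The selector field names here are an assumption rather than the definitive schema, so please check the documentation for the exact configuration.

```yaml
# Sketch only: the "selector" field names are assumptions, not the confirmed
# schema. Classes without the matching label are not imported, and objects
# referencing them will fail to sync to the host.
sync:
  fromHost:
    ingressClasses:
      enabled: true
      selector:
        labels:
          env: dev
```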
vCluster Platform v4.3 - Postgres & Aurora Support, Cost Control Enhancements, Sleep Mode Integration, and UI Upgrades
vCluster Platform v4.3 brings powerful new capabilities across scalability, cost efficiency, and user experience, making it easier than ever to run, manage, and optimize virtual clusters at scale.

External Database Connector - Support for Postgres and Aurora, plus UI support

Virtual clusters always require a backing store. If you explore vCluster with the default settings and in the most lightweight form possible, your virtual cluster's data is stored in a SQLite database, a single-file database typically kept in a persistent volume mounted into your vCluster pod. However, for users who want a more scalable and resilient backing store, vCluster also supports:

- etcd (deployed outside of your virtual cluster and self-managed)
- Embedded etcd (runs an etcd cluster as part of your virtual cluster pods, fully managed by vCluster)
- External databases (MySQL or Postgres databases)
- External database connector (MySQL or Postgres database servers, including Amazon Aurora)

In v4.1, we introduced the External Database Connector feature with MySQL support only; we have now extended support to Postgres database servers. We have also introduced a UI flow to manage your database connectors and select them directly when creating virtual clusters (a minimal configuration sketch appears at the end of this entry). If you want to learn more about External Database Connectors, view the documentation.

UI Updates

There have been major refreshes across different parts of our UI to improve the user experience. The navigation has been updated to make project settings and secrets quicker to find, and to allow customization of the platform's colors.

Permission Management

Admins can now take advantage of the new permissions page to view, in one place, all the permissions granted to a user or team. From this view, it's also easy to add or edit permissions in one central location. In addition to the new permissions page, the pages for creating and editing users, teams, and global roles have all been updated to improve the experience of creating those objects.

New Pages for vCluster and vCluster Templates

Creating virtual clusters or virtual cluster templates from the platform has always been central to quick and easy virtual cluster creation. With vCluster v0.20, vcluster.yaml became the main configuration file, and in vCluster Platform we have introduced an easy way to edit the file directly as well as a UI to populate and create it.

Cost Control

In v4.2, we introduced a cost savings dashboard. In this release, the metrics and query collection were reworked for better performance and maintainability. Note: with these changes, all collected metrics will be reset upon upgrading.

Sleep Mode Support for vCluster v0.24+

In vCluster v0.24, we introduced the ability to configure sleep for virtual clusters directly in the vcluster.yaml. In previous versions of the platform, that vCluster feature was not compatible with the platform, and those values were not taken into account. Now, when a virtual cluster uses sleep mode, the platform reads the configuration, and if the agent is deployed on the host cluster, it takes over managing sleep mode as if it had been configured through the platform.

For a list of additional fixes and smaller changes, please refer to the release notes.
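As mentioned in the External Database Connector section above, here is a minimal vcluster.yaml sketch that points a virtual cluster at a platform-managed connector. The connector name is a placeholder, and the sketch assumes a Postgres (or Aurora) connector has already been created in the platform UI.

```yaml
# "my-postgres-connector" is a placeholder for a connector created in the
# platform; enabled must be set explicitly (see the vCluster v0.25 notes below).
controlPlane:
  backingStore:
    database:
      external:
        enabled: true
        connector: my-postgres-connector
```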
vCluster v0.25 - Istio Integration
Our newest release of vCluster includes a boatload of new features, updates, and some important changes. Foremost, we are excited to announce our integration with Istio!

Istio Integration

Istio has long been a cornerstone of service-mesh solutions within the CNCF community, and we are thrilled to introduce an efficient solution built directly into vCluster. This integration eliminates the need to run separate Istio installations inside each virtual cluster. Instead, it enables all virtual clusters to share a single host installation, creating a simpler and more cost-effective architecture that is easier to maintain.

Our integration works by syncing Istio's DestinationRules, VirtualServices, and Gateways from the virtual cluster into the host cluster. Any pods created in a virtual-cluster namespace labeled with istio.io/dataplane-mode will have that label attached when they are synced to the host cluster. Finally, a Kubernetes Gateway resource is automatically deployed to the virtual cluster's host namespace to serve as a waypoint proxy. Pods will then be automatically included in the mesh.

```yaml
integrations:
  istio:
    enabled: true
```

Please note that the integration uses Ambient mode directly and is not compatible with sidecar mode. See the Istio Integration docs for more info, prerequisites, and configuration options.
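As mentioned above, pods pick up the mesh label via their namespace. Below is a sketch of such a namespace inside the virtual cluster; the namespace name is a placeholder, and the ambient value follows Istio's usual dataplane-mode convention rather than anything stated in this changelog.

```yaml
# Hypothetical namespace inside the virtual cluster. The label key comes from
# the text above; the "ambient" value is Istio's conventional dataplane-mode
# value and is assumed here.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    istio.io/dataplane-mode: ambient
```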
Support for Istio in Sleep Mode

Along with our Istio integration comes direct support for our vCluster-native workload sleep feature. Once the Istio integration is set up on the virtual cluster and Sleep Mode is enabled, workloads that aren't receiving traffic through the mesh can be automatically spun down, and once traffic arrives they scale back up again. This allows an Istio-only ingress setup, one that doesn't use a standard controller such as ingress-nginx, to take advantage of our Sleep Mode feature. See the docs for more information on how this can be configured. In many cases it will be as simple as the following:

```yaml
sleepMode:
  enabled: true
  autoSleep:
    afterInactivity: 30s
integrations:
  istio:
    enabled: true
```

Notable changes and improvements

vCluster K8s Distribution Deprecations and Migration Path

Due to the complications of maintaining and testing across several Kubernetes distributions, and their divergence from upstream, both k0s and k3s are now deprecated as vCluster control plane options in v0.25. This leaves upstream Kubernetes as the recommended and fully supported option. In v0.26 k0s will be removed; k3s, however, will remain an option for some time to give users a chance to migrate to upstream Kubernetes.

To assist with this change, we have added another way to migrate from k3s to k8s, beyond our Snapshot & Restore feature released in v0.24. This new approach only requires changing the vcluster.yaml; see the docs for more details. Starting with a k3s vCluster config:

```yaml
controlPlane:
  distro:
    k3s:
      enabled: true
```

You can now simply update that distro config to use k8s instead, and then upgrade:

```yaml
controlPlane:
  distro:
    k8s:
      enabled: true
```

Please be aware that this only works for migrating from k3s to k8s, and not the other way around.

InitContainer and Startup changes

Our initContainer process has been revamped to use a single image instead of three. This not only simplifies the config.yaml around custom control-plane settings, but also lowers startup time by a significant margin. See the PR for further information. This is a breaking change if any custom images were used, specifically under the following paths (others, like extraArgs or enabled, are not changing):

```yaml
controlPlane:
  distro:
    k8s:
      controllerManager:
        ...
      scheduler:
        ...
      apiServer:
        ...
        image:
          tag: v1.32.1
```

All three images can now be set from a single variable:

```yaml
controlPlane:
  distro:
    k8s:
      image:
        tag: v1.32.1
```

Images.txt and Images-optional.txt updates for custom registries

The images.txt file is generated as an asset on every vCluster release to facilitate the air-gapped registry process, and is also useful for informational purposes. However, in recent releases the file had become overly long and convoluted, which can mean a substantial amount of unnecessary bandwidth and hassle for users who mirror every image to their private registry. Starting in v0.25 the file is renamed to vcluster-images.txt and only contains the latest version of each component. Second, a new images-optional.txt has been added to the assets, containing all additional possible component images. This makes first-time installs much easier and allows current users to select only the images they need. Finally, unsupported Kubernetes versions have been removed, which further reduces the size of the file.

Update to database connector in vcluster.yaml required

If you were using an external database connector or external datasource in version v0.24.x or earlier, configuration was possible without the enabled flag:

```yaml
controlPlane:
  backingStore:
    database:
      external:
        connector: test-secret
```

or:

```yaml
controlPlane:
  backingStore:
    database:
      external:
        datasource: "mysql://root:password@tcp(0.0.0.0)/vcluster"
```

Before upgrading to or starting a v0.25.0 virtual cluster, you must also set enabled: true, otherwise a new SQLite database will be used instead. See this issue for more details and future updates.

```yaml
controlPlane:
  backingStore:
    database:
      external:
        enabled: true # required
        connector: test-secret
```

Other Changes

Fixes & Other Changes

As stated in our v0.23 announcement, deploying multiple virtual clusters per namespace has been deprecated. When using v0.25 and beyond, the virtual cluster pod will no longer start.

Some commands and configurations, such as the patches feature, did not check licensing at command execution and instead only surfaced errors in the logs. These checks now run at command execution, so that any licensing issues can be quickly surfaced and resolved.

For a list of additional fixes and smaller changes, please refer to the release notes.
Announcing: vCluster’s Rancher OSS Integration
Today, we're happy to announce our open source vCluster integration for Rancher. It allows creating and managing virtual clusters via the Rancher UI in much the same way you would manage traditional Kubernetes clusters with Rancher.

Why we built this

For years, Rancher users have opened issues in the Rancher GitHub repos asking for a tighter integration between Rancher and vCluster. At last year's KubeCon in Paris, we took the first step to address this need by shipping our first Rancher integration as part of our commercial vCluster Platform. Over the past year, we have seen many of our customers benefit from this commercial integration, but we also heard from many vCluster community users that they would love to see this integration as part of our open source offering.

We believe that more virtual clusters in the world are a net benefit for everyone: happier developers, more efficient infrastructure use, less wasted resources, and less energy consumption, leading to a reduced strain on the environment. Additionally, we realized that rebuilding our integration as a standalone Rancher UI extension and a lightweight controller for managing virtual clusters would be even easier to install and operate for Rancher admins. So, we decided to do just that and designed a new vCluster integration for Rancher from scratch.

Anyone operating Rancher can now offer self-service virtual clusters to their Rancher users by adding the vCluster integration to their Rancher platform. See the gif below for how this experience looks from a user's perspective.

How the new integration works

Using the integration requires the following steps:

1. Have at least one functioning cluster running in Rancher that can serve as the host cluster for running virtual clusters.
2. Install the two parts of the vCluster integration inside Rancher:
   - Our lightweight operator
   - Our Rancher UI extension
3. Grant users access to a project or namespace in Rancher, so they can deploy virtual clusters there.

That's it! Under the hood, when users deploy a virtual cluster via the UI extension, we deploy the regular vCluster Helm chart, and the controller automatically detects any virtual cluster (whether deployed via the UI, Helm, or otherwise) and connects it as a cluster in Rancher, so users can manage these virtual clusters just like they would manage any other cluster in Rancher. Additionally, the controller takes care of permissions: any members of the Rancher project that a virtual cluster was deployed into are automatically added to the new Rancher cluster by configuring Rancher's roles.

And that's it. No extra work, no credit card required. You have a completely open-source and free solution for self-service virtual clusters in Rancher: as lightweight and easy as self-service namespaces in Rancher, but as powerful as provisioning separate clusters for your users.

Next Steps

In the long run, we plan to migrate users of our previous commercial Rancher integration to the new OSS integration, but for now there are a few limitations that still need to be worked on in the OSS integration in order to achieve feature parity. One important piece is the sync between projects and project permissions in Rancher and projects and project permissions in the vCluster Platform. Another is the sync of users and SSO credentials. We're actively working on these features. Subscribe to our changelog and you'll be the first to know when all of this is ready for you to use.
Please note that deploying both plugins at the same time is not supported, as they are not compatible with each other. For further installation instructions, see the following:

- vcluster-rancher-extension-ui
- vcluster-rancher-operator
Announcing vNode: Stronger Multi-Tenancy for Kubernetes with Node-Level Isolation
We're excited to introduce vNode, a new product from LoftLabs that brings secure workload isolation to the node layer of Kubernetes. vNode enables platform engineering teams to enforce strict multi-tenancy inside shared Kubernetes clusters, without the cost or complexity of provisioning separate physical nodes.

Why We Built vNode

Most teams face a painful trade-off in Kubernetes multi-tenancy: share nodes and risk security vulnerabilities, or isolate workloads on separate nodes and waste resources. vNode breaks this trade-off by introducing lightweight virtual nodes that provide strong isolation without performance penalties or infrastructure sprawl.

With vNode, teams can:

- Enforce tenant isolation at the node level, preventing noisy-neighbor issues and improving security.
- Run privileged workloads, like Docker-in-Docker or Kubernetes control planes, safely inside shared infrastructure.
- Meet compliance needs by eliminating shared-kernel risks.
- Avoid the overhead of VMs, syscall translation, or re-architecting their Kubernetes environments.

How It Works

vNode introduces a lightweight runtime that runs alongside containerd, using Linux user namespaces to isolate workloads. Each physical node is partitioned into multiple secure virtual nodes, providing stronger multi-tenancy inside shared clusters. It integrates seamlessly with any Kubernetes distribution that uses containerd (on Linux kernel 6.1+).

Better Together: vNode + vCluster

vNode complements our existing product, vCluster, by adding node-level isolation to virtual clusters. Together, they provide full-stack multi-tenancy, isolating both control planes and workloads within the same shared cluster.

Join the Private Beta

We're currently rolling out vNode through a private beta. Be among the first to try it out. Sign up for early access at vNode.com
vCluster v0.24 - Snapshot & Restore and Sleep Mode improvements
Spring is just around the corner and you know what that means: KubeCon Europe is almost here! We have some exciting features to announce, and not just in vCluster. Swing by our booth to find out all the details, but here's a little preview of what shipped a few weeks before KubeCon London:

Snapshot & Restore

Backing up and restoring a virtual cluster has been possible for some time with Velero, but that solution has its drawbacks: the restore paradigm doesn't work seamlessly while the virtual cluster pod is running, it can be slow, and it has limited use cases. Several users have requested a more full-featured backup process, and today we are happy to announce a solution.

Snapshots are now a built-in feature of vCluster. These are quick, lightweight bundles created directly through the vCluster CLI, with a variety of options to make your life easier. Taking a snapshot and exporting it directly to an OCI-compliant registry, for example, is done with a simple command:

```
vcluster snapshot my-vcluster oci://ghcr.io/my-user/example-repo:snap-one
```

Each snapshot includes the etcd database content, the Helm release, and the vcluster.yaml configuration. Restoring is just as easy:

```
vcluster restore my-vcluster oci://ghcr.io/my-user/my-repo:my-tag
```

You can use snapshot and restore for the following use cases:

- Restoring in place, i.e. reverting a virtual cluster to a previous state based on an earlier snapshot
- Migrating between config options that don't have a direct migration path, such as changing the backing store of a virtual cluster
- Making several copies of the same virtual cluster from the same snapshot (almost as if you were using a snapshot to package a virtual cluster for distribution)
- Migrating a virtual cluster from one Kubernetes cluster to another (please note that snapshots currently don't support PV migrations)

We anticipate the flexibility this feature provides will lead to many unique implementations. As of today this feature supports saving to an OCI registry, to S3, or to local storage with the container protocol. Please see the docs for more information, including current limitations and further configuration options.

Sleep Mode Parity

In our v0.22 release we announced vCluster-native Sleep Mode. This feature gave virtual clusters deployed outside of our platform, and without an agent, the ability to take advantage of sleep mechanisms by putting workloads to sleep while leaving the control plane running. With v0.24 and our upcoming Platform 4.3 release, this vCluster-native Sleep Mode is being combined with our original platform-based Sleep Mode into a single unified solution that contains the best of both worlds. Once your virtual cluster is configured with Sleep Mode (see the example below), it and the Platform will work together to shut down as many components as possible: without an agent it will shut down only workloads, and with an agent it will also sleep the control plane.

```yaml
sleepMode:
  enabled: true
  autoSleep:
    afterInactivity: 1h
    exclude:
      selector:
        labels:
          dont: sleep
```

This new feature will once again be known simply as Sleep Mode. Please see the docs for instructions on how to migrate your configuration, and how to upgrade both the Platform (once it is available in the Platform 4.3 release) and your virtual cluster.

Other Changes

Notable Improvements

The Export Kube-Config feature has been improved to allow more than one secret to be added.
This change also aims to remove confusion, as the config itself now clarifies how both the default secret and the additionalSecrets are set. In the example below you can see the default secret and multiple additional secrets being configured. The original exportKubeConfig.secret option is now deprecated and will be removed in a future version.

```yaml
exportKubeConfig:
  context: domain-context
  server: https://domain.org:443
  additionalSecrets:
    - namespace: alternative-namespace
      name: vc-alternative-secret
      context: alternative-context
      server: https://domain.org:443
    - namespace: ...
```

Fixes & Other Updates

In the announcement for version v0.20 we noted that ghcr.io/loft-sh/vcluster-pro would become the default image in vCluster. To avoid confusion, the loft-sh/vcluster image is now deprecated and will no longer be updated in future versions. Instead, loft-sh/vcluster-oss will continue to be built for public use and can be used as a replacement (see the sketch at the end of this entry).

As stated in our v0.23 announcement, deploying multiple virtual clusters per namespace is now deprecated. When using v0.24, the virtual cluster pod will not start unless you enable the reuseNamespace option in your vCluster's config.yaml. This functionality will soon be removed entirely.

For a list of additional fixes and smaller changes, please refer to the release notes.
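For anyone switching images ahead of this deprecation, here is a rough vcluster.yaml sketch. It assumes the image override lives under controlPlane.statefulSet.image, so treat the exact path as an assumption and verify it against the configuration reference.

```yaml
# Sketch only: the controlPlane.statefulSet.image path is an assumption;
# check the vcluster.yaml reference for the exact location.
controlPlane:
  statefulSet:
    image:
      repository: loft-sh/vcluster-oss
```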
vCluster v0.23 - Expanded fromHost resource syncing and support for Kubernetes v1.32
The v0.23 release of vCluster introduces powerful new capabilities for syncing resources from the host cluster, including secrets, configMaps, and namespaced custom resources. Additionally, this update brings support for Kubernetes v1.32 and several key improvements for stability and usability. Let's dive in!

fromHost resource syncing

Perhaps the most integral feature of vCluster is syncing resources to and from the host cluster, and our team has been making consistent progress in this area to provide the functionality that helps our users the most. With v0.23, three new resource types can be synced from the host cluster to the virtual cluster: secrets, configMaps, and namespaced custom resources. While cluster-scoped custom resources could already be synced, they provided limited functionality; the ability to create and sync any namespaced custom resource opens up many new use cases and integrations. Similarly, the ability to sync secrets or configMaps from separate host namespaces into your virtual cluster allows expanded configuration and deployment options (a small consumption example appears at the end of this entry).

The example vcluster.yaml below shows how to sync:

- Secrets from namespace foo in the host cluster to namespace bar in the virtual cluster
- Custom resources example.demo.vcluster.com from the vCluster host namespace to the default namespace in the virtual cluster

```yaml
sync:
  fromHost:
    secrets:
      enabled: true
      mappings:
        byName:
          # syncs all Secrets from the "foo" namespace to the "bar" namespace
          # in the virtual cluster. Secret names are unchanged.
          "foo/*": "bar/*"
    customResources:
      example.demo.vcluster.com:
        enabled: true
        mappings:
          byName:
            # syncs all `example` objects from the vCluster host namespace
            # to the "default" namespace in the virtual cluster
            "": "default"
```

For more information, see the docs on ConfigMaps, Secrets, and Custom Resources.

Support for Kubernetes v1.32

In this release, we've also added support for Kubernetes v1.32, enabling users to take advantage of the latest enhancements, security updates, and performance improvements in the upstream Kubernetes release. However, please be aware that this update does not extend to k0s.

Notable Improvements

Intermittent connection interruptions between virtual clusters and their platform no longer disrupt the use of pro features and are handled more gracefully.

A PriorityClass can now be automatically applied to workloads.

Other Changes

Please note that any Node objects created on the virtual cluster will no longer be automatically removed and must be manually cleaned up.

Deploying multiple virtual clusters per namespace is now deprecated. When this is detected, the following will occur:

- In v0.23, a warning is logged.
- In v0.24, the virtual cluster pod will not start unless you enable the reuseNamespace option in your vCluster's config.yaml.
- In v0.25, this functionality will no longer be supported, the reuseNamespace option will be removed, and the virtual cluster pod will no longer start.

For a list of additional fixes and smaller changes, please refer to the release notes.
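To round out the secrets example above, here is a hypothetical pod in the virtual cluster's bar namespace consuming one of the Secrets synced from the host's foo namespace; the pod and secret names are placeholders.

```yaml
# Hypothetical consumer: "app-credentials" stands in for any Secret that was
# synced from the host's "foo" namespace into "bar" in the virtual cluster.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
  namespace: bar
spec:
  containers:
    - name: app
      image: nginx
      envFrom:
        - secretRef:
            name: app-credentials
```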