Latest changes
Announcing: vCluster’s Rancher OSS Integration
Today, we're happy to announce our open source vCluster integration for Rancher. It allows you to create and manage virtual clusters via the Rancher UI in much the same way as you would manage traditional Kubernetes clusters with Rancher.

Why we built this

For years, Rancher users have opened issues in the Rancher GitHub repos asking for a tighter integration between Rancher and vCluster. At last year's KubeCon in Paris, we took the first step to address this need by shipping our first Rancher integration as part of our commercial vCluster Platform. Over the past year, we have seen many of our customers benefit from this commercial integration, but we also heard from many vCluster community users that they would love to see this integration as part of our open source offering.

We believe that more virtual clusters in the world are a net benefit for everyone: happier developers, more efficient infrastructure use, fewer wasted resources, and lower energy consumption, reducing strain on the environment. Additionally, we realized that rebuilding our integration as a standalone Rancher UI extension and a lightweight controller for managing virtual clusters would be even easier for Rancher admins to install and operate. So we decided to do exactly that and designed a new vCluster integration for Rancher from scratch.

Anyone operating Rancher can now offer self-service virtual clusters to their Rancher users by adding the vCluster integration to their Rancher platform. See the gif below for how this experience looks from a user's perspective.

How the new integration works

Using the integration requires the following steps:

1. Have at least one functioning cluster running in Rancher that can serve as the host cluster for running virtual clusters.
2. Install the two parts of the vCluster integration inside Rancher: our lightweight operator and our Rancher UI extension (see the installation note and the rough sketch below).
3. Grant users access to a project or namespace in Rancher so they can deploy a virtual cluster into it.

That's it! Under the hood, when users deploy a virtual cluster via the UI extension, we deploy the regular vCluster Helm chart, and the controller automatically detects any virtual cluster (whether deployed via the UI, Helm, or otherwise) and connects it as a cluster in Rancher, so users can manage these virtual clusters just like they would manage any other cluster in Rancher. Additionally, the controller takes care of permissions: any member of the Rancher project that the respective virtual cluster was deployed into is automatically added to the new Rancher cluster by configuring Rancher's Roles.

And that's it. No extra work, no credit card required. You have a completely open-source and free solution for self-service virtual clusters in Rancher: as lightweight and easy as self-service namespaces in Rancher, but as powerful as provisioning separate clusters for your users.

Next Steps

In the long run, we plan to migrate users of our previous commercial Rancher integration to the new OSS integration, but for now there are a few limitations that still need to be addressed before the OSS integration reaches feature parity. One important piece is the sync between projects and project permissions in Rancher and projects and project permissions in the vCluster Platform. Another is the sync of users and SSO credentials. We're actively working on these features. Subscribe to our changelog and you'll be the first to know when all of this is ready for you to use.
Please note that deploying both plugins at the same time is not supported, as they are not compatible with each other. For further installation instructions, see the following repositories: vcluster-rancher-extension-ui and vcluster-rancher-operator.
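Below is a rough sketch of what installing the operator component via Helm could look like. The chart location, chart name, and target namespace are assumptions for illustration only, not documented values; follow the linked repositories for the actual installation instructions, and keep the compatibility note above in mind.

```
# Assumed chart location and names - verify against the vcluster-rancher-operator repository
helm upgrade --install vcluster-rancher-operator \
  oci://ghcr.io/loft-sh/charts/vcluster-rancher-operator \
  --namespace cattle-system --create-namespace
```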
Announcing vNode: Stronger Multi-Tenancy for Kubernetes with Node-Level Isolation
We're excited to introduce vNode, a new product from LoftLabs that brings secure workload isolation to the node layer of Kubernetes. vNode enables platform engineering teams to enforce strict multi-tenancy inside shared Kubernetes clusters, without the cost or complexity of provisioning separate physical nodes.

Why We Built vNode

Most teams face a painful trade-off in Kubernetes multi-tenancy: share nodes and risk security vulnerabilities, or isolate workloads on separate nodes and waste resources. vNode breaks this trade-off by introducing lightweight virtual nodes that provide strong isolation without performance penalties or infrastructure sprawl.

With vNode, teams can:

- Enforce tenant isolation at the node level, preventing noisy neighbor issues and improving security.
- Run privileged workloads safely, like Docker-in-Docker or Kubernetes control planes, inside shared infrastructure.
- Meet compliance needs by eliminating shared kernel risks.
- Avoid the overhead of VMs, syscall translation, or re-architecting their Kubernetes environments.

How It Works

vNode introduces a lightweight runtime that runs alongside containerd, using Linux user namespaces to isolate workloads. Each physical node is partitioned into multiple secure virtual nodes, providing stronger multi-tenancy inside shared clusters. It integrates seamlessly with any Kubernetes distribution that uses containerd (on Linux kernel 6.1+).

Better Together: vNode + vCluster

vNode complements our existing product, vCluster, by adding node-level isolation to virtual clusters. Together, they provide full-stack multi-tenancy, isolating both control planes and workloads within the same shared cluster.

Join the Private Beta

We're currently rolling out vNode through a private beta. Be among the first to try it out. Sign up for early access at vNode.com
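Since vNode's isolation builds on Linux user namespaces, here is a generic (not vNode-specific) way to see user-namespace remapping in action: inspect a containerized process's UID map. The pod name is a placeholder.

```
# Generic check, not a vNode command: print the UID mapping of a containerized process.
# An identity mapping like "0 0 4294967295" means no user-namespace remapping is in place;
# any other mapping means the container's UIDs (including root) map to unprivileged host UIDs.
kubectl exec my-pod -- cat /proc/self/uid_map
```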
vCluster v0.24 - Snapshot & Restore and Sleep Mode improvements
Spring is just around the corner, and you know what that means: KubeCon Europe is almost here! We have some exciting features to announce, and not just in vCluster. Swing by our booth to find out all the details, but here's a little preview of what just shipped a few weeks before KubeCon London.

Snapshot & Restore

Backing up and restoring a virtual cluster has been possible for some time with Velero, but that solution has its drawbacks. The restore paradigm doesn't work seamlessly while the virtual cluster pod is running, it can be slow, and it has limited use cases. Several users have requested a more full-featured backup process, and today we are happy to announce a solution.

Snapshots are now a built-in feature of vCluster. These are quick, lightweight bundles created directly through the vCluster CLI, with a variety of options to make your life easier. Taking a snapshot and exporting it directly to an OCI-compliant registry, for example, is done with a simple command:

```
vcluster snapshot my-vcluster oci://ghcr.io/my-user/example-repo:snap-one
```

Each snapshot includes the etcd database content, the Helm release, and the vcluster.yaml configuration. Restoring is just as easy:

```
vcluster restore my-vcluster oci://ghcr.io/my-user/my-repo:my-tag
```

You can use snapshot and restore for the following use cases:

- Restoring in place, i.e. reverting a virtual cluster to a previous state based on an earlier snapshot
- Migrating between config options that don't have a direct migration path, such as changing the backing store of a virtual cluster
- Making several copies of the same virtual cluster from the same snapshot (almost as if you were using a snapshot to package a virtual cluster for distribution)
- Migrating a virtual cluster from one Kubernetes cluster to another (please note that snapshots currently don't support PV migrations)

We anticipate the flexibility this feature provides will enable many unique implementations. As of today, this feature supports saving to an OCI registry, to S3, or to local storage with the container protocol (a hedged sketch of the non-OCI targets appears at the end of this section). Please see the docs for more information, including current limitations and further configuration options.

Sleep Mode Parity

In our v0.22 release we announced vCluster-Native Sleep Mode. This feature gave virtual clusters deployed outside of our Platform, and without an agent, the ability to take advantage of sleep mechanisms by putting workloads to sleep while leaving the control plane running. With v0.24 and our upcoming Platform 4.3 release, this vCluster-Native Sleep Mode will be combined with our original Platform-based Sleep Mode into a single unified solution that contains the best of both worlds. Once your virtual cluster is configured with Sleep Mode (see the example below), it and the Platform will work together to shut down as many components as possible. Without an agent it will shut down only workloads; with an agent it will also sleep the control plane.

```yaml
sleepMode:
  enabled: true
  autoSleep:
    afterInactivity: 1h
    exclude:
      selector:
        labels:
          dont: sleep
```

This new feature will once again be known simply as Sleep Mode. Please see the docs for instructions on how to migrate your configuration and how to upgrade both the Platform (once the Platform 4.3 release is available) and your virtual cluster.

Other Changes

Notable Improvements

The Export Kube-Config feature has been improved to allow more than one secret to be added.
This change also aims to remove confusion, as the config itself now clarifies how both the default secret and the additionalSecrets are set. In the example below you can see the default secret and multiple additional secrets being configured. The original exportKubeConfig.secret option is now deprecated and will be removed in a future version.

```yaml
exportKubeConfig:
  context: domain-context
  server: https://domain.org:443
  additionalSecrets:
    - namespace: alternative-namespace
      name: vc-alternative-secret
      context: alternative-context
      server: https://domain.org:443
    - namespace: ...
```

Fixes & Other Updates

In the announcement for version v0.20, we noted that **ghcr.io/loft-sh/vcluster-pro** would be the default image in vCluster. To avoid confusion, the loft-sh/vcluster image is now deprecated and will no longer be updated in future versions. Instead, the loft-sh/vcluster-oss image will continue to be built for public use and can be used as a replacement.

As stated in our v0.23 announcement, deploying multiple virtual clusters per namespace is now deprecated. When using v0.24, the virtual cluster pod will not start unless you enable the reuseNamespace option in your vCluster's config.yaml. This functionality will soon be removed entirely.

For a list of additional fixes and smaller changes, please refer to the release notes.
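The snapshot feature above also supports S3 and local container storage as targets, alongside OCI registries. Here is a hedged sketch of those variants; the URL formats are assumptions modeled on the oci:// example, so double-check the exact syntax against the vCluster snapshot docs.

```
# Assumed URL schemes - verify against the vCluster snapshot documentation
vcluster snapshot my-vcluster s3://my-bucket/snapshots/my-vcluster-snap
vcluster snapshot my-vcluster container:///data/my-vcluster-snap.tar.gz
```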
vCluster v0.23 - Expanded fromHost resource syncing and support for Kubernetes v1.32
The v0.23 release of vCluster introduces powerful new capabilities for syncing resources from the host cluster, including Secrets, ConfigMaps, and namespaced custom resources. Additionally, this update brings support for Kubernetes v1.32 and several key improvements for stability and usability. Let's dive in!

fromHost resource syncing

Perhaps the most integral feature of vCluster is syncing resources to and from the host cluster. Our team has been focusing on making consistent progress in this area to provide the functionality that helps our users the most. With v0.23, three new resource types can be synced from the host cluster to the virtual cluster: Secrets, ConfigMaps, and namespaced custom resources. While cluster-scoped custom resources could already be synced, they provided limited functionality. The ability to create and sync any namespaced custom resource opens up many new use cases and integrations. Similarly, the ability to sync Secrets or ConfigMaps from separate host namespaces into your virtual cluster allows expanded configuration and deployment options.

See the example vcluster.yaml below showing how to sync:

- Secrets from namespace foo in the host cluster to namespace bar in the virtual cluster
- Custom resources example.demo.vcluster.com from the host namespace to the default namespace in the virtual cluster

```yaml
sync:
  fromHost:
    secrets:
      enabled: true
      mappings:
        byName:
          # syncs all Secrets from the "foo" namespace
          # to the "bar" namespace in the virtual cluster. Secret names are unchanged.
          "foo/*": "bar/*"
    customResources:
      example.demo.vcluster.com:
        enabled: true
        mappings:
          byName:
            # syncs all `example` objects from the vCluster host namespace
            # to the "default" namespace in the virtual cluster
            "": "default"
```

For more information, see the docs on ConfigMaps, Secrets, and Custom Resources. A hedged ConfigMap example also appears at the end of this section.

Support for Kubernetes v1.32

In this release, we've also added support for Kubernetes v1.32, enabling users to take advantage of the latest enhancements, security updates, and performance improvements in the upstream Kubernetes release. However, please be aware that this update does not extend to k0s.

Notable Improvements

- Intermittent connection interruptions between virtual clusters and their Platform will no longer disrupt the usage of pro features and will be handled more gracefully.
- A PriorityClass can now be automatically applied to workloads.

Other Changes

Please note that any Node objects created on the virtual cluster will no longer be automatically removed and must be manually cleaned up.

Deploying multiple virtual clusters per namespace is now deprecated. When this is detected, the following will occur:

- In v0.23, a warning will be logged
- In v0.24, the virtual cluster pod will not start unless you enable the reuseNamespace option in your vCluster's config.yaml
- In v0.25, this functionality will no longer be supported, the reuseNamespace option will be removed, and the virtual cluster pod will no longer start

For a list of additional fixes and smaller changes, please refer to the release notes.
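ConfigMap syncing from the host uses the same mapping syntax as the Secrets example above. Here is a minimal sketch, assuming the structure mirrors the Secrets block; the namespace names are placeholders.

```yaml
sync:
  fromHost:
    configMaps:
      enabled: true
      mappings:
        byName:
          # assumed to mirror the Secrets syntax above: sync all ConfigMaps from the
          # host namespace "shared-config" into the "default" namespace of the virtual cluster
          "shared-config/*": "default/*"
```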
vCluster v0.22 - Native Sleep Mode and Cert-Manager Integration
The v0.22 release of vCluster provides two exciting new Pro features, native sleep mode and a cert-manager integration, along with a host of OSS updates. Let's dive in.

vCluster Native Sleep Mode

vCluster's Sleep Mode is one of our most popular features. However, with the advent and adoption of externally deployed virtual clusters, users often find themselves in an environment without an agent deployed to each cluster. The current sleep mode feature requires a platform connection and an agent, a constraint that this new version of sleep mode addresses. With vCluster native sleep mode, the virtual cluster itself can suspend workloads without shutting down the vCluster control plane, and it works with or without an agent being present. Similar to the current sleep mode, it uses two signals, user activity and ingress, to determine whether workloads should be suspended. It can also be set to sleep and wake on a regular schedule.

Here is an example of how to configure the new vCluster native sleep mode in vcluster.yaml:

```yaml
experimental:
  sleepMode:
    enabled: true
    autoSleep:
      afterInactivity: 1h
```

For more configuration options, see the docs. As you can see in this snippet, the new sleep mode is currently considered experimental because it doesn't yet provide the same capabilities as the platform-based sleep mode. Most notably, the ability to put not only workloads but also the control plane to sleep is missing from this new sleep mode. We are working on adding all missing capabilities over the next couple of months. Once the new sleep mode reaches parity with the previous platform-based sleep mode, vCluster native sleep mode will become the primary sleep feature of vCluster and the Platform. Keep an eye out for upcoming announcements in future releases, and in the meantime, we'd love to hear your feedback!

Cert-Manager Integration

Integrations make vCluster easier to use and more convenient to configure as you scale up your environments. Taking advantage of the shared platform stack of controllers in an underlying host cluster saves you overhead and hassle. One of the more ubiquitous use cases is cert-manager. While syncing these resources was already possible using our Custom Resource syncing, we've turned this into an integration to provide a true easy-button experience. With the following configuration, Certificate resources created in your virtual cluster will be signed by the cert-manager running in your host cluster:

```yaml
integrations:
  certManager:
    enabled: true
```

For a more advanced walkthrough see our recent blog post, or for more details and advanced configuration see the docs pages. A sketch of a Certificate resource that would be signed this way appears at the end of this section.

Fixes & Other Changes

For a list of additional fixes and smaller changes, please refer to the release notes.
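To illustrate what the cert-manager integration enables, here is a minimal sketch of a standard cert-manager Certificate created inside the virtual cluster. The issuer and DNS names are placeholders, and we assume a matching ClusterIssuer from the host's cert-manager is available through the integration.

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-cert
  namespace: default
spec:
  secretName: example-cert-tls      # the signed certificate ends up in this Secret
  dnsNames:
    - app.my-vcluster.example.com   # placeholder DNS name
  issuerRef:
    kind: ClusterIssuer             # assumption: a ClusterIssuer exposed via the integration
    name: my-cluster-issuer         # placeholder issuer name
```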
Platform v4.2 - Cost Savings Dashboard
One of the key benefits of using vCluster compared to spinning up yet another full-blown Kubernetes cluster is that a virtual cluster is significantly cheaper than a traditional cluster from EKS, AKS, or GKE. But how much exactly are you saving when you use virtual clusters instead of traditional clusters? That question has been challenging for many of our users, and we've noticed some of you creating your own cost dashboards to find the answer. So, we decided to work on a built-in dashboard that ships with vCluster Platform and allows anyone to view this data with ease. With vCluster Platform v4.2, the platform tracks all the necessary data via Prometheus and shows you a pretty accurate view of the cost savings you're likely going to see on your cloud provider bill. Here is how it looks in our newly released dashboard view:

In this new dashboard, you will see graphs for each of the ways that vCluster helps you reduce cost:

- Fewer Clusters: Every public cloud cluster costs roughly $900 per year in cluster fees, which is the fee you pay just to turn the cluster on, not including any compute for your worker nodes. In comparison, a virtual cluster often costs less than $90 to run using average public cloud CPU and memory pricing.
- Sleep Mode: When virtual clusters run idle, you can configure them to go to sleep after a certain period of inactivity. And because a virtual cluster is just a container, it can automatically spin up again in seconds once you start using it later on. This is unthinkable with traditional clusters, and every minute your virtual cluster is sleeping, you're saving compute cost or freeing up capacity for other workloads.
- [COMING SOON] Shared Platform Stack: Virtual clusters running on the same underlying host cluster can be configured to share controllers from the underlying cluster. Instead of having to run platform stack tools such as nginx-ingress, cert-manager, OPA, Prometheus, and many others 300 times for 300 clusters, you can run them ONCE and then spin up 300 virtual clusters that all use these shared components.

Enable Cost Dashboard

If you are using a self-hosted deployment of vCluster Platform and you upgrade to v4.2 or higher, this feature will automatically be enabled for you. You can use the Platform's Helm values to disable this feature or configure additional options:

```yaml
config:
  costControl:
    enabled: true # default enabled
    settings:
      averageCPUPricePerNode:
        price: 31
        timePeriod: Monthly
```

For more details, please refer to the documentation.

Limitations

While this initial release of the dashboard will already be very valuable for many of our customers who are curious about the ROI of their investment in vCluster, we are still working on a few topics that will most likely be addressed in future releases:

- In vCluster Cloud, the dashboard is currently not available, but we plan to ship support for our cloud offering in early 2025.
- Tracking savings from shared platform stack components is currently not available, but we're working hard on making it available in the next 2-3 months.
- Cloud pricing is currently defined with a simple price per CPU/memory variable in the dashboard under "Cost Settings". Depending on the feedback and popularity of this dashboard, we might invest in automatically retrieving pricing details via your cloud provider's pricing APIs to run even more accurate calculations.
- For now, the dashboard is aimed at public cloud use cases, but we are already thinking about ways to make it more useful for private cloud and bare metal deployments.
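As noted above, the dashboard can also be turned off via the Platform's Helm values. A minimal sketch, derived from the configuration shown earlier (apply it with your usual Helm upgrade workflow for the Platform):

```yaml
config:
  costControl:
    enabled: false # disables the cost savings dashboard
```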
vCluster Cloud + vCluster Platform v4.1 with External Database Connector
We're excited to introduce vCluster Cloud, our managed solution that makes adopting and exploring vCluster Platform easier than ever, and the new External Database Connector in Platform v4.1, which automates secure, scalable database provisioning for your virtual clusters. Dive in and experience these powerful updates today.

Introducing vCluster Cloud

Virtual clusters have been adopted by companies of all sizes, and while our enterprise users love the fact that vCluster as well as our Platform are optimized to be self-hosted, setting up and running the Platform in particular can require some effort. To make it easier for anyone to explore and adopt our Platform, we are launching vCluster Cloud, our managed offering for anyone who would like us to host and manage the Platform for them. Try out vCluster Cloud today if you're interested.

While vCluster Cloud is still in beta and not recommended for mission-critical production workloads, it is a great option for you if you want to:

- Explore the Platform without having to set it up yourself
- Activate Pro features for a virtual cluster without having to set up the Platform to receive a license key
- Run a proof-of-concept project with the Platform without any setup overhead
- Experiment with new releases before you upgrade your self-hosted production instance of the Platform
- Test configuration changes in a sandbox-like environment that can be easily deleted and recreated within less than a minute

After today's beta release, we will be working hard to make vCluster Cloud fully production-ready because we know that small and mid-size organizations in particular might prefer to leverage this managed offering instead of bearing the operational burden of running the Platform themselves. In the future, we might even go as far as offering fully hosted virtual clusters, where we manage the entire control plane and its state while only your workloads run in your own cloud or on-premises infrastructure. If this might be of interest to you, please contact us via sales@loft.sh

Platform v4.1 - External Database Connector

Virtual clusters always require a backing store. If you explore vCluster with the default settings and in the most lightweight form possible, your virtual clusters' data is stored in a SQLite database, a single-file database that is typically kept in a persistent volume mounted into your vCluster pod. However, for users who want more scalable and resilient backing stores, vCluster also supports:

- etcd (deployed outside of your virtual cluster and self-managed)
- Embedded etcd (runs an etcd cluster as part of your virtual cluster pods, fully managed by vCluster; see the sketch below)
- External databases (MySQL or Postgres)

The option to connect to an external database is particularly exciting for many of our vCluster power users because most organizations have well-established options for running and maintaining relational databases at scale. And if you are running in the public cloud, you can even offload database HA clustering as well as backup and recovery processes to your cloud provider, e.g. using solutions such as AWS RDS.
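For reference, here is a minimal vcluster.yaml sketch for the embedded etcd option mentioned above. The field names follow the backingStore schema as we understand it, so double-check them against the vcluster.yaml reference.

```yaml
controlPlane:
  backingStore:
    etcd:
      embedded:
        enabled: true # runs etcd as part of the virtual cluster pods, fully managed by vCluster
```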
BEFORE: Manual Database Provisioning

Until now, in order to use external databases for your virtual clusters, you needed to:

1. Create a database
2. Create a database user and password
3. Configure the virtual cluster to use this database and the respective credentials via vcluster.yaml, as shown in the example below:

```yaml
controlPlane:
  backingStore:
    database:
      external:
        enabled: true
        dataSource: "mysql://username:password@hostname:5432/vcluster-1"
```

Doing this manually for a few virtual clusters may be possible, but it is not a great solution because of the following risks:

- Manual provisioning is time-consuming and prone to human error
- Database credentials have to be configured separately for each virtual cluster and live inside the workload clusters, making it more likely that they are handled improperly and potentially exposed or leaked
- Cleaning up databases and credentials for deleted virtual clusters is entirely manual and will often be forgotten
- Rotating credentials becomes tedious and is likely something users will not want to do frequently

AFTER: Automated Database Provisioning via Connector

To address the problems of manually provisioning external databases for virtual clusters, we built a Platform feature called External Database Connector. Here is how to use it:

1. In the Platform, create a Database Connector by specifying your database server and the credentials to access it (this information is stored in a regular Kubernetes secret and can be provisioned and managed with your preferred Kubernetes secret store, e.g. Vault).
2. For each virtual cluster, configure this connector as the backing store, as shown in the example below:

```yaml
controlPlane:
  backingStore:
    database:
      external:
        enabled: true
        connector: "my-connector"
```

Once the virtual cluster starts, the following happens:

1. The virtual cluster connects to the Platform.
2. The Platform creates a separate database (inside your database server) for each virtual cluster.
3. The Platform creates a non-privileged user for this database.
4. The Platform relays the username and password to the virtual cluster, so it can access the database as its backing store.

This approach has the following benefits over manual database provisioning:

- Fully automated database and user provisioning for each virtual cluster
- Central credentials handling with in-memory, on-demand transfer of credentials from the Platform to virtual clusters, drastically reducing the risk of leaking credentials
- Automatic cleanup of databases and credentials upon deletion of virtual clusters
- Soon: automated options for rotating credentials to make them short-lived

If you want to learn more about External Database Connectors, view the documentation.
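For illustration only: the connector's credentials ultimately live in a regular Kubernetes secret, as noted above. The exact keys the Platform expects are not spelled out here, so the names and values below are hypothetical placeholders; refer to the External Database Connector documentation for the real schema.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-connector            # hypothetical name matching the connector referenced above
  namespace: vcluster-platform  # assumed Platform namespace
type: Opaque
stringData:
  # hypothetical key name; the actual keys are defined by the Platform documentation
  endpoint: "mysql://admin-user:admin-password@mysql.example.com:3306"
```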