Latest changes
Platform v4.2 - Cost Savings Dashboard
One of the key benefits of using vCluster compared to spinning up yet another full-blown Kubernetes cluster is that a virtual cluster is significantly cheaper than a traditional cluster from EKS, AKS, or GKE. But how much exactly are you saving when you use virtual clusters instead of traditional clusters? That question has been challenging for many of our users, and we've noticed some of you building your own cost dashboards to find the answer. So we decided to build a dashboard that ships with vCluster Platform and allows anyone to view this data with ease. With vCluster Platform v4.2, the Platform tracks all the necessary data via Prometheus and shows you a fairly accurate view of the cost savings you are likely to see on your cloud provider bill.

In the newly released dashboard view, you will see graphs for each of the ways that vCluster helps you reduce cost:

- Fewer Clusters: Every public cloud cluster costs roughly $900 per year in cluster fees alone, i.e. the fee you pay just to turn the cluster on, not including any compute for your worker nodes. In comparison, a virtual cluster often costs less than $90 to run using average public cloud CPU and memory pricing.
- Sleep Mode: When virtual clusters run idle, you can configure them to go to sleep after a certain period of inactivity. And because a virtual cluster is just a container, it can automatically spin up again in seconds once you start using it later on. This is unthinkable with traditional clusters, and every minute your virtual cluster is sleeping, you are saving a lot of compute cost or freeing up capacity for other workloads.
- [COMING SOON] Shared Platform Stack: Virtual clusters running on the same underlying host cluster can be configured to share controllers from the underlying cluster. Instead of running platform stack tools such as nginx-ingress, cert-manager, OPA, Prometheus, and many others 300 times for 300 clusters, you can run them once and then spin up 300 virtual clusters that all use these shared components.

Enable Cost Dashboard
If you are using a self-hosted deployment of vCluster Platform and you upgrade to v4.2 or higher, this feature will automatically be enabled for you. You can use the Platform's Helm values to disable this feature or configure additional options:

```yaml
config:
  costControl:
    enabled: true # default enabled
    settings:
      averageCPUPricePerNode:
        price: 31
        timePeriod: Monthly
```

For more details, please refer to the documentation.

Limitations
While this initial release of the dashboard will already be very valuable for many of our customers who are curious about the ROI of their investment in vCluster, we are still working on a few topics that will most likely be addressed in future releases:

- In vCluster Cloud, the dashboard is currently not available, but we plan to ship support for our cloud offering in early 2025.
- Tracking savings from shared platform stack components is currently not available, but we're working hard on making it available in the next 2-3 months.
- Cloud pricing is currently defined with a simple price-per-CPU/memory variable in the dashboard under "Cost Settings", but depending on the feedback and popularity of this dashboard, we might invest in automatically retrieving pricing details via your cloud provider's pricing APIs to run even more accurate calculations.
- For now, the dashboard is aimed at public cloud use cases, but we are already thinking about ways to make it more useful for private cloud and bare metal deployments.
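In the meantime, if the default pricing assumptions don't match your environment, you can adjust them through the same Helm values. A minimal sketch that reuses the costControl keys shown above; the numbers are placeholders, not recommendations:

```yaml
# Platform Helm values - minimal sketch reusing the keys shown above;
# adjust the placeholder price to match your own provider's pricing.
config:
  costControl:
    enabled: true
    settings:
      averageCPUPricePerNode:
        price: 25        # placeholder value
        timePeriod: Monthly
```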
vCluster Cloud + vCluster Platform v4.1 with External Database Connector
We're excited to introduce vCluster Cloud, our managed solution that makes adopting and exploring vCluster Platform easier than ever, and the new External Database Connector in Platform v4.1, which automates secure, scalable database provisioning for your virtual clusters. Dive in and experience these updates today.

Introducing vCluster Cloud
Virtual clusters have been adopted by companies of all sizes, and while our enterprise users love the fact that vCluster as well as our Platform are optimized to be self-hosted, setting up and running the Platform in particular can require some effort. To make it easier for anyone to explore and adopt our Platform, we are launching vCluster Cloud, our managed offering for anyone who would like us to host and manage the Platform for them. Try out vCluster Cloud today if you're interested.

While vCluster Cloud is still in beta and not recommended for mission-critical production workloads, it is a great option if you want to:

- Explore the Platform without having to set it up yourself
- Activate Pro features for a virtual cluster without having to set up the Platform to receive a license key
- Run a proof-of-concept project with the Platform without any setup overhead
- Experiment with new releases before you upgrade your self-hosted production instance of the Platform
- Test configuration changes in a sandbox-like environment that can be deleted and recreated in less than a minute

After today's beta release, we will be working hard to make vCluster Cloud fully production-ready, because we know that small and mid-size organizations in particular might prefer this managed offering over the operational burden of running the Platform themselves. In the future, we might even offer fully hosted virtual clusters where we manage the entire control plane and its state, while only your workloads run in your own cloud or on-premises infrastructure. If this might be of interest to you, please contact us via sales@loft.sh.

Platform v4.1 - External Database Connector
Virtual clusters always require a backing store. If you explore vCluster with the default settings and in the most lightweight form possible, your virtual clusters' data is stored in SQLite, a single-file database that is typically kept in a persistent volume mounted into your vCluster pod. However, for users that want more scalable and resilient backing stores, vCluster also supports:

- etcd (deployed outside of your virtual cluster and self-managed)
- Embedded etcd (runs an etcd cluster as part of your virtual cluster pods, fully managed by vCluster)
- External databases (MySQL or Postgres)

The option to connect to an external database is particularly exciting for many of our vCluster power users because most organizations have well-established options for running and maintaining relational databases at scale. And if you are running in the public cloud, you can even offload database HA clustering as well as backup and recovery processes to your cloud provider, e.g. using solutions such as AWS RDS.
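As a quick point of reference for the non-default options above, switching a virtual cluster to embedded etcd is a single toggle in vcluster.yaml. A minimal sketch, assuming the controlPlane.backingStore.etcd.embedded key path from the current vCluster docs; double-check the docs for your release:

```yaml
# vcluster.yaml - minimal sketch; key path assumed from current vCluster docs
controlPlane:
  backingStore:
    etcd:
      embedded:
        enabled: true
```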
BEFORE: Manual Database Provisioning
So far, in order to use external databases for your virtual clusters, you needed to:

1. Create a database
2. Create a database user and password
3. Configure the virtual cluster to use this database and the respective credentials via vcluster.yaml, as shown in the example below:

```yaml
controlPlane:
  backingStore:
    database:
      external:
        enabled: true
        dataSource: "mysql://username:password@hostname:5432/vcluster-1"
```

Doing this manually for a few virtual clusters may be possible, but it is not a great solution because of the following risks:

- Manual provisioning is time-consuming and prone to human error.
- Database credentials have to be configured separately for each virtual cluster and live inside the workload clusters, making it more likely that they are handled improperly and potentially exposed or leaked.
- Cleaning up databases and credentials for deleted virtual clusters is entirely manual and will often be forgotten.
- Rotating credentials becomes tedious and is likely something users will not want to do frequently.

AFTER: Automated Database Provisioning via Connector
To address the problems of manually provisioning external databases for virtual clusters, we built a Platform feature called External Database Connector. Here is how to use it:

1. In the Platform, create a Database Connector by specifying your database server and the credentials to access it (this information is stored in a regular Kubernetes secret and can be provisioned and managed with your preferred Kubernetes secret store, e.g. Vault).
2. For each virtual cluster, configure this connector as the backing store as shown in the example below:

```yaml
controlPlane:
  backingStore:
    database:
      external:
        enabled: true
        connector: "my-connector"
```

Once the virtual cluster starts, the following happens:

1. The virtual cluster connects to the Platform.
2. The Platform creates a separate database (inside your database server) for each virtual cluster.
3. The Platform creates a non-privileged user for this database.
4. The Platform relays the username and password to the virtual cluster so it can access the database as its backing store.

This approach has the following benefits over manual database provisioning:

- Fully automated database and user provisioning for each virtual cluster
- Central credential handling and in-memory, on-demand transfer of credentials from the Platform to virtual clusters, drastically reducing the risk of leaking credentials
- Automatic cleanup of databases and credentials upon deletion of virtual clusters
- Soon: automated options for rotating credentials to make them short-lived

If you want to learn more about External Database Connectors, view the documentation.
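Because the connector's server credentials live in a regular Kubernetes secret (step 1 above), they can be provisioned by whatever secret tooling you already use. The sketch below is purely illustrative; the secret name, namespace, and key names are hypothetical, so check the External Database Connector documentation for the exact format the Platform expects:

```yaml
# Illustrative only: name, namespace, and key names (host, user, password)
# are hypothetical placeholders, not the Platform's actual schema.
apiVersion: v1
kind: Secret
metadata:
  name: my-connector-credentials
  namespace: vcluster-platform
type: Opaque
stringData:
  host: "mysql.example.com:3306"
  user: "connector-admin"
  password: "change-me"
```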
vCluster v0.21 – Custom Resource Syncing, Bi-Directional Sync, FIPS Compliance, & More
Our latest update to vCluster brings powerful new capabilities, including customizable resource syncing, bi-directional sync support, and native integrations with key Kubernetes tools. Additionally, v0.21 introduces a FIPS-compliant edition to meet the stringent security needs of the public sector.

Custom Resource Syncing
You can now sync any custom resource in two simple steps:

1. Get the full name of the custom resource definition (CRD). For a full list of available CRDs in your cluster, run kubectl get crds.
2. Set enabled: true for the resource under sync.toHost.customResources.[resourceName].

To sync the Certificate resource of cert-manager, for example, you would specify the sync using the CRD name certificates.cert-manager.io as shown in the example below:

```yaml
sync:
  toHost:
    customResources:
      certificates.cert-manager.io:
        enabled: true
```

Enabling the sync for custom resources in the toHost section means that:

- The CRD is imported from the host cluster into the virtual cluster.
- All custom resources of this kind will be synced from the virtual cluster to the host cluster.

Please note that the fromHost custom resource sync works similarly: first, the CRD is synced into the virtual cluster, then the sync of custom resources from the host starts. However, fromHost custom resource syncing is limited to cluster-scoped CRDs, and the resources synced into the virtual cluster are read-only within the virtual cluster. For additional details, please refer to the documentation for toHost.customResources and fromHost.customResources.

Sync Patches
You can now patch resources during the sync process. This works for any resource that vCluster is syncing, including the ones you specify via the newly introduced Custom Resource Syncing. There are two types of patches:

- Patches that allow you to sync references to other objects (e.g. secretRef)
- Custom patches that you can define using JavaScript expressions

Patching References To Other Objects
Reference patches allow you to specify paths within objects that contain references to other objects. For example, let's take the Certificate resource in cert-manager:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-com
  namespace: sandbox
spec:
  secretName: example-com-tls
```

When you create this resource, you have to specify secretName, which defines where cert-manager will store the secret it writes the certificate into once it is provisioned. If you use the new Custom Resource Syncing for this Certificate resource to reuse a shared cert-manager running on the host cluster, vCluster will sync the Certificate resource down to the host cluster. However, given that multiple namespaces inside the virtual cluster sync to the same namespace in the host cluster, conflicts would arise pretty quickly, and on top of that it is almost impossible for vCluster to know that spec.secretName in the above example references a secret. To solve this problem, you can now tell vCluster where references can be found in a custom resource, and vCluster will enable syncing for these resources while automatically patching the names of the referenced objects, just as it does with any other resource during syncing.
See the example configuration below to tell vCluster that the field spec.secretName contains a reference to a Kubernetes secret:

```yaml
sync:
  toHost:
    customResources:
      certificates.cert-manager.io:
        enabled: true
        patches:
          - path: spec.secretName
            reference:
              apiVersion: v1
              kind: Secret
```

Custom Patching with JavaScript Expressions
With custom patches, you can now use JavaScript expressions that modify resources on the fly during the sync process. The current value of the specified field is available in the variable value, and the JavaScript expression should return the value you would like the field to have once it is synced. The following example modifies all strings within the spec.dnsNames array to add the www. prefix if it is not already part of the string. Additionally, you can use the reverseExpression field to tell vCluster how to revert the change. Specifying this field allows you to retain bi-directional syncing, another new feature we are introducing in v0.21.

```yaml
sync:
  toHost:
    customResources:
      certificates.cert-manager.io:
        enabled: true
        patches:
          - path: spec.dnsNames[*]
            expression: "value.startsWith('www.') ? value : `www.${value}`"
            # Optional. Specifies how to sync back changes.
            # If empty, will disable bi-directional sync for this field.
            reverseExpression: "value.startsWith('www.') ? value.slice(4) : value"
```

Bi-Directional Sync
One of the most exciting features of v0.21 is bi-directional syncing. So far, vCluster sync has been one-directional: if a resource was synced from the virtual cluster to the host cluster and a controller then modified the resource in the host cluster, those changes would be overwritten and reset by the syncer because it considers the original resource (i.e. the resource in the virtual cluster) the source of truth. With bi-directional syncing, vCluster keeps track of changes to each individual field and is able to sync changes back and forth. This feature is not yet supported for all resources and fields, but we aim to expand it to all resources and fields very soon. Currently, bi-directional sync is supported for:

- All fields on custom resources
- Annotations and labels on any resource
- Certain fields on the following core resources: Pod, ServiceAccount, ConfigMap, Secret, Service, Ingress, PriorityClass, PersistentVolumeClaim, PersistentVolume, StorageClass

Please refer to the documentation for a full list of all fields that support bi-directional sync.

Integrations
While Custom Resource Syncing enables you to sync any resource easily, some resources tend to be very complex, so we decided to start building native integrations for major tools in the Kubernetes ecosystem. Making these integrations part of vCluster lets you enable them with a simple true/false toggle instead of configuring a lot of YAML. Additionally, this approach gives users confidence that the vCluster maintainers will have their back as these controllers and resources evolve over time. The first two integrations available as of today are:

- External Secrets Operator
- KubeVirt

External Secrets Operator
Please find an example of how to enable the External Secrets Operator integration below and refer to the documentation for additional details.

```yaml
integrations:
  externalSecrets:
    enabled: true
    sync:
      externalSecrets:
        enabled: true
      stores:
        enabled: true
      clusterStores:
        enabled: true
```

KubeVirt
Please find an example of how to enable the KubeVirt integration below and refer to the documentation for additional details.
```yaml
integrations:
  kubeVirt:
    enabled: true
    sync:
      dataVolumes:
        enabled: true
```

FIPS Compliance
Given the huge interest from the public sector, we are committed to working on security and compliance features that enable these users to confidently build on vCluster for mission-critical use cases. As part of this effort, vCluster v0.21 is now available as a FIPS 140-2 compliant edition. The FIPS-compliant binaries require a valid license key and are part of our commercial vCluster offering only. If you are interested in deploying FIPS-compliant virtual clusters, please contact us via gov@loft.sh.

Other Changes

Notable Improvements
- Kubernetes v1.31 is now supported, except for k0s.

Fixes & Other Changes
For a list of additional fixes and smaller changes, please refer to the release notes.
vCluster Platform v4.0 - Externally deployed virtual clusters, a revamped UI, dynamic OIDC clients, & more
With this major release of the Platform, we're making it easier than ever to add existing virtual clusters to the Platform without having to change the way you deploy them. With the Platform, you can then enable Pro features, add authentication via OIDC, inspect vCluster resources via the UI, and more.

Externally deployed virtual clusters
You can now add any virtual cluster to the Platform, regardless of whether it was deployed by the Platform or by an external tool such as Helm, Argo CD, or Terraform. Many open-source vCluster users have created hundreds of virtual clusters in their organizations with the automation tools they are most familiar with, simply by deploying the vCluster Helm chart. And many of these users have reached out to us asking how to use the Platform with these existing virtual clusters. However, adding them to the Platform was previously only possible with an import flow that let the Platform's controllers attempt to manage the lifecycle of such a virtual cluster. This is problematic because the tool that originally deployed the virtual cluster also assumes that it is responsible for its lifecycle. Users could then make changes in both tools, and the tools would reconcile the virtual cluster toward their own desired state, competing against each other because there are two sources of truth. This is particularly problematic for continuous deployment tools such as Argo CD.

With this release, it is now possible to add an existing virtual cluster to the Platform without the Platform trying to take over lifecycle management for it. The reason users want to add their virtual clusters to the Platform even without lifecycle management is that it allows them to:

- Enable Pro features of vCluster (the license key is distributed to the virtual cluster via the Platform)
- Give admins an overview of their fleet of virtual clusters across the organization
- View virtual clusters in the UI to get an overview of versions, enabled features, etc.
- Detect issues such as vCluster errors, outdated Kubernetes versions, or upgrade issues at a glance
- Receive additional context and debugging information for errors and the overall health status of each virtual cluster
- Inspect resources inside virtual clusters via the UI
- Give other users access to virtual clusters through the Platform, which is especially useful if SSO is enabled
- Enable integrations provided by the Platform, such as the Argo CD integration that can automatically add virtual clusters to Argo CD or the Vault integration that allows the use of Vault secrets in virtual clusters

Adding a virtual cluster to the Platform is a non-invasive and easily reversible action. To add a virtual cluster to the Platform, run:

```
vcluster platform add vcluster [name]
```

To spare you from running this command every time you create a virtual cluster with the CLI, the vCluster CLI will also auto-add any virtual cluster that you create via vcluster create, as long as you are logged into the Platform. To log in, simply run vcluster login [PLATFORM_URL]. Please note that adding externally deployed virtual clusters to the Platform is only supported for vCluster v0.20 and above.

UI Revamp
As part of this release, we drastically improved the UX by redesigning some key views in the product, including the creation and editing flow for virtual clusters, as shown in the screenshots below.
Screenshots: the virtual cluster create flow (including adding externally deployed virtual clusters), the vcluster.yaml editing experience for reconfiguring virtual clusters, and inspecting virtual cluster resources.

Dynamic OIDC Clients
The Platform was designed to allow users to connect other tools as OIDC clients, enabling authentication through the Platform. This feature is especially valuable for administrators, as it allows them to configure SSO centrally within the Platform rather than setting it up individually for each tool. Many users have already leveraged this capability for applications like Argo, Harbor, and others. With Platform v4.0, the registration of OIDC clients has moved out of the Platform config; OIDC clients are now configured in separate Kubernetes secrets. This makes it easier to register new applications without having to restart the Platform, and it allows RBAC permissions for adding and editing OIDC clients to be scoped separately from the global Platform configuration.

BREAKING: Project Prefix Configurability
All project-scoped information and custom resources are stored in namespaces within the management cluster where the Platform runs. So far, all project namespaces have been prefixed with loft-p-. With v4.0 of the Platform, the default prefix has changed to p-, and you can now change this prefix with the config option projectNamespacePrefix within the Platform config. Please note that because the default value for projectNamespacePrefix is now p-, existing users need to explicitly set projectNamespacePrefix: "loft-p-" in their Platform config before upgrading from v3 to v4. More details can be found in the migration guide.

Other Changes

Notable Improvements
- The Platform now requires you to set a vCluster version when creating a virtual cluster. Previously, the version field could be empty and would auto-populate with the Platform's default value. Now, the field is prefilled in the UI, but a value must be saved in the virtual cluster configuration, making the version each virtual cluster runs more explicit and easier to manage over time.
- Spaces have been renamed to Namespaces to clarify that they are just namespaces managed by the Platform.
- The default namespace when installing the Platform is now vcluster-platform rather than loft. This can be adjusted, and upgrading the Platform will not change an existing namespace.
- Loft CLI commands have been ported into the vCluster CLI, and we urge everyone to start using the vCluster CLI going forward.

Fixes & Other Changes
For a list of additional fixes and smaller changes, please refer to the release notes.
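For reference, here is a minimal sketch of pinning the old prefix before upgrading. It assumes the Platform config is nested under config: in your Helm values, as in the cost dashboard example further up; check the migration guide for the exact placement in your setup:

```yaml
# Platform Helm values - minimal sketch; assumes the Platform config
# sits under `config:` in your values file (see the migration guide).
config:
  projectNamespacePrefix: "loft-p-"
```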
vCluster v0.20 GA
Major Changes
Please read this section carefully, as it may contain breaking changes.

New config format: vcluster.yaml
This release introduces the new vcluster.yaml file, which centralizes all configuration options for vCluster and serves as the Helm values at the same time. This new configuration features a completely revamped format designed to enhance the user experience:

- Validation: We provide a JSON schema for vcluster.yaml, which the vCluster CLI and the vCluster Platform UI now use to validate configurations before creating or upgrading virtual clusters. This schema has also been published to SchemaStore, so most IDEs will recognize the vcluster.yaml file and provide autocomplete and validation directly in the editor.
- Consolidated configuration: All configuration is centralized in the vcluster.yaml file, eliminating the confusion previously caused by the mix of CLI flags, annotations, environment variables, and Helm values.
- Consistent grouping and naming: Fields in vcluster.yaml are logically grouped under topical categories, simplifying navigation and enhancing discoverability of related features.
- Docs alignment: Our documentation now mirrors the structure of vcluster.yaml, making it easier to cross-reference settings in the file with the corresponding sections in the docs.

Migrating to vcluster.yaml
To make it easy to convert your old values.yaml (v0.19 and below) to the new vcluster.yaml format, you can run the new vcluster convert config command. For example, let's take these pre-v0.20 configuration values:

```yaml
# values.yaml
sync:
  ingresses:
    enabled: true
  nodes:
    enabled: true
  fake-nodes:
    enabled: false
syncer:
  replicas: 3
  extraArgs:
    - --tls-san=my-vcluster.example.com
```

Running vcluster convert config --distro k3s will generate the following vcluster.yaml:

```yaml
# vcluster.yaml
sync:
  toHost:
    ingresses:
      enabled: true
  fromHost:
    ingressClasses:
      enabled: true
    nodes:
      enabled: true
controlPlane:
  distro:
    k3s:
      enabled: true
  proxy:
    extraSANs:
      - my-vcluster.example.com
  statefulSet:
    highAvailability:
      replicas: 3
    scheduling:
      podManagementPolicy: OrderedReady
```

For more details on upgrading from older versions to v0.20, please read our configuration conversion guide.

Unified Helm chart for simplified deployment
We consolidated the distro-specific vCluster Helm charts (vcluster (k3s), vcluster-k8s, vcluster-k0s, and vcluster-eks) into a single, unified chart. This change is designed to simplify the management and upgrading of virtual clusters:

- Single source: No more juggling multiple charts. The vcluster.yaml serves as the single source for all configuration in a unified Helm chart for all distros.
- Enhanced validation: We've introduced a JSON schema for the Helm values, ensuring that upgrades only proceed if your configuration matches the expected format, reducing deployment errors.
- Customizable distributions: The new unified chart structure enables easier customization of Kubernetes distributions directly via the Helm chart values:

```yaml
# vcluster.yaml
controlPlane:
  distro:
    k8s:
      enabled: true
```

K8s distro now supports SQLite & external databases
So far, virtual clusters running the vanilla k8s distro only supported etcd as a storage backend, which made this distro comparatively harder to operate than k3s. With vCluster v0.20, we're introducing two new backing store options for vanilla k8s besides etcd:

- SQLite offers a more lightweight solution for data storage without the overhead associated with more complex choices like etcd or external databases.
  It is the new default for virtual clusters running the vanilla k8s distro.
- External Databases allow users to use any MySQL- or Postgres-compatible database as the backing store for virtual clusters running the vanilla k8s distro. This is especially useful for users who plan to outsource backing store operations to managed database offerings such as AWS RDS or Azure Database.

Note: Switching backing stores is currently not supported. In order to use these new backing stores, you will need to deploy net new virtual clusters and migrate the data manually with backup and restore tooling such as Velero. Upgrading your configuration via vcluster convert config will explicitly write the previously used data store into your configuration, so that upgrading an existing virtual cluster does not require changing the backing store.

EKS distro has been discontinued
Previously, vCluster offered the option to use EKS as a distro to run vCluster. However, this led many users to believe they had to use the EKS distro to run vCluster on an EKS host cluster, which is not correct: any vCluster distro is able to run on an EKS host cluster. Given that the EKS distro did not provide any benefits beyond the vanilla k8s distro and introduced unnecessary confusion and maintenance effort, we decided to discontinue it. If you want to deploy virtual clusters on an EKS host cluster, we recommend using the k8s distro for vCluster going forward. If you plan on upgrading a virtual cluster that used EKS as a distro, please carefully read and follow the upgrade guide in the docs.

Changes in defaults for vCluster
There are several changes in the default configuration of vCluster that are important for any users upgrading to v0.20+ or deploying net new clusters.

Default distro changed from k3s to vanilla k8s
We changed the default distribution for the vCluster control plane from k3s to vanilla k8s. This is the least opinionated option, offering greater flexibility and compatibility:

- Flexibility: More customization and scalability options, catering to a broader range of deployment needs.
- Compatibility: In addition to embedded and external etcd, you can now use various storage backends including SQLite, Postgres, and MySQL. This addition addresses previous challenges with using k8s for smaller virtual clusters.

Upgrade Notes: Switching distributions is not supported, so in order to use this new default, you will need to deploy net new virtual clusters.

Default image vcluster-pro
We've updated the default image repository for vCluster to ghcr.io/loft-sh/vcluster-pro. This change allows users to seamlessly test and adopt vCluster Pro features without having to switch images from OSS to Pro. The Pro features are integrated into the Pro image but remain inactive by default, so your experience stays exactly the same as with the OSS image.

Upgrade Notes: When upgrading from previous versions, the image will automatically be updated to pull from the new repository. For users who prefer to continue using the open-source image, simply adjust your vcluster.yaml configuration to set the repository to loft-sh/vcluster-oss. See the docs for details.

New Default Scheduling of Control Plane Pod: Parallel
We've updated the pod management policy of the control plane StatefulSet from OrderedReady to Parallel. Since vCluster typically runs as a StatefulSet, this setting cannot be changed after the virtual cluster has been deployed.
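If you prefer to keep the previous OrderedReady behavior for net new virtual clusters, you can pin it explicitly. A minimal sketch using the statefulSet scheduling path that also appears in the conversion example above:

```yaml
# vcluster.yaml - minimal sketch; pins the pre-v0.20 behavior for a newly
# created virtual cluster (this cannot be changed after deployment).
controlPlane:
  statefulSet:
    scheduling:
      podManagementPolicy: OrderedReady
```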
Increased Resource Requests
We increased the default resource requests for vCluster:

- Ephemeral storage from 200Mi to 400Mi (to ensure that SQLite-powered virtual clusters have enough space to store data without running out of storage when used over a prolonged period of time)
- CPU from 3m to 20m
- Memory from 16Mi to 64Mi

These changes are minimal and won't have any significant impact on the footprint of a virtual cluster.

Disabled Node Syncing for Kind Clusters
When deploying virtual clusters with the vCluster CLI, syncing of real nodes is no longer automatically enabled for Kind clusters.

Upgrade Notes: If you want to keep this syncing enabled, you will need to add the following configuration to your vcluster.yaml:

```yaml
sync:
  fromHost:
    nodes:
      enabled: true
controlPlane:
  service:
    spec:
      type: NodePort
```

Behavior Changes

CLI Updates
There have been significant CLI changes, as the changes above required refactoring how the CLI works in some areas. Besides these changes, we merged the overlapping commands found in loft and vcluster pro. The full summary of CLI changes can be found in our docs:

- General list of CLI changes, listing what's new and what's been renamed or dropped
- Guide to using vcluster convert to convert values.yaml files of pre-v0.20 virtual clusters to the updated vcluster.yaml used when upgrading to v0.20+
- Reference guide mapping loft CLI commands to the new vcluster commands

Ingress syncing behavior has changed
Prior to v0.20, enabling the syncing of Ingresses from the virtual to the host cluster would also automatically sync all IngressClasses from the host cluster. However, this required a cluster role which some vCluster users don't have. We've now decoupled these behaviors so you can enable syncing of Ingresses and IngressClasses separately:

```yaml
sync:
  toHost:
    ingresses:
      enabled: true
  fromHost:
    ingressClasses:
      enabled: true
```

Updated CAPI Provider Support
Our Cluster API (CAPI) provider has been updated with a new version (v0.2.0) that supports the new vcluster.yaml format.
Platform v4.0.0-beta
Announcing vCluster Platform v4.0.0-beta. View the full changelog here.

Highlights

Deploy vCluster your way
Deploy vCluster with your existing tools like Argo CD without requiring a Platform Agent to be installed in the host cluster. Externally deployed instances will now connect and register directly with the Platform after running the vCluster CLI command:

```
vcluster add vcluster VCLUSTER_NAME
```

Alternatively, configure the Platform secret in the vcluster.yaml configuration file:

```yaml
external:
  platform:
    apiKey:
      secretName: "vcluster-platform-api-key"
      namespace: "" # empty defaults to the Helm release namespace
```

The Platform now supports multiple vCluster deployment types:

- Deployed by the Platform, managed by the Platform
- Deployed by Helm, managed by the Platform with a Platform Agent on the host cluster
- Deployed by Helm, managed by the Platform without a Platform Agent on the host cluster

Support for vCluster v0.20
You can now use the latest vCluster version v0.20.0-beta together with the Platform v4.0.0-beta capabilities and activate vCluster Pro features.

Migrating vCluster from v0.19 to v0.20
The Platform automatically attempts to convert existing vCluster v0.19 values to the new v0.20 vcluster.yaml configuration file when upgrading via the UI. This is in addition to the vCluster v0.20 CLI command you can run to convert pre-v0.20 values:

```
vcluster convert config --distro k3s -f VALUES_FILE > vcluster.yaml
```

Redesigned vCluster UI editor
The new vCluster UI editor brings configuration, cluster resource visibility, and audit logs together in one full-page view. vCluster v0.20 instances display a new vcluster.yaml viewer and editor, making configuration easier with validation and auto-complete. "Spaces" has been renamed to "Host Namespaces" for clarity; functionality remains the same as in Platform v3.4.

Other Changes
- vCluster v0.20.x is now the default version when creating virtual clusters via the Platform.
- Offline virtual clusters without an Agent on the host cluster are automatically deregistered and removed from the Platform after 24 hours of being disconnected from the Platform.
- Added a status filter to the Namespaces product page, formerly called "Spaces".

Breaking Changes
- Project namespaces: The default namespace prefix changed from loft-p- to just p-. Note: Existing Platform users need to explicitly set projectNamespacePrefix: loft-p- in the Platform configuration when upgrading or re-installing from pre-v4 to v4 to keep the existing namespace prefix.
- Isolated Control Plane: The isolated control plane configuration moved from the Platform to the vcluster.yaml configuration file under experimental.isolatedControlPlane.
- Spaces: Existing users of the Loft Spaces product need to use the vCluster v0.20 CLI in conjunction with this Platform v4.0.0-beta release.
- Removed APIs: virtualclusters.cluster.loft.sh and spaces.cluster.loft.sh
- Externally deployed: Externally deployed virtual clusters now have a spec.external boolean field on the VirtualClusterInstance CRD instead of the previous loft.sh/skip-helm-deploy annotation.

Deprecations
- Loft CLI: The Loft CLI is now deprecated. The majority of commands have been migrated to the vCluster v0.20 CLI.
- Auto-import: Automatically importing via annotation is no longer supported. Virtual clusters can be automatically imported by configuring external.platform.apiKey.secretName or by creating them via the vCluster CLI while logged into the Platform: vcluster create VCLUSTER_NAME --driver platform.
Upgrading
- Ensure that you have upgraded to v3 first before attempting to upgrade to v4.
- Existing virtual clusters cannot have their vCluster version modified via the UI at the moment; this will be enabled in a subsequent release. However, upgrading from v0.19 to v0.20 is currently possible via the vCluster list page within the "vCluster Version" column.
- Upgrading from Platform v3 to v4 is only possible with the vCluster v0.20 CLI. The UI will support upgrading in a future release.

View the upgrade guide. View the full changelog here.
Platform 3.4.5
Changes
- feature: Add support for Istio ingress gateway sleep mode activity tracking (by @lizardruss in #2519)
- enhancement: Performance improvements for loft use space and loft use vcluster commands (by @lizardruss in #2609)
- fix(agent): The NetworkPeer proxy is now running highly available on all agent replicas (by @ThomasK33 in #2527)
- fix(loftctl): Loftctl now prints additional debug messages if the --debug flag is set (by @ThomasK33 in #2521)
- fix: The Platform will now wait with project deletion until the underlying namespace is deleted correctly (by @neogopher in #2544)
- fix: Fixed an issue where generic sync was blocked without a Pro subscription (by @rohantmp in #2537)
- fix: Fixed an IPAM race condition that could cause multiple network peers to be assigned the same IP on startup (by @ThomasK33 in #2550)
- fix: The cluster controller will update a cluster's phase status during its initialization (by @ThomasK33 in #2566)
- fix: Fixed an issue where the Platform wasn't able to deploy vClusters with version v0.20.0-alpha.1 or newer (by @FabianKramm in #2572)
- fix: Automatically fix incorrect IPAM state in NetworkPeer CRDs (by @ThomasK33 in #2573)
- fix: Fixed an issue with clusters not connecting to a highly available control plane (by @ThomasK33 in #2579)
- fix: Fixed an issue where the ts net server would restart if multiple access keys were found (by @FabianKramm in #2612)
- chore: Updated the default vCluster version to 0.19.5 (by @ThomasK33 in #2551)