What's New
We’re excited to introduce HyperCX 10.9, the new major HyperCX OpenNebula release.
We invite you to explore the new features, join the community conversations, and share your feedback on this final release.
Base Platform
HyperCX 10.9 runs on Ubuntu 22.04 LTS (Jammy Jellyfish) and is kept up to date with the latest security and maintenance patches.
Monitoring
- New Grafana dashboards for HyperCX BEATS are now available for monitoring the cluster, covering hosts, VMs, the cluster itself, storage performance, and the cluster VPN. There is also a world-map dashboard that shows every successful access to the cloud alongside banned sources (those that failed at least 3 access attempts). This monitoring is now available for HyperCX BENTO and HCX1 clusters.
- Clients can now access Grafana Monitoring from the public network, as long as the cluster is directly connected to the internet. Otherwise, HyperCX BEATS can still be reached through the cluster VPN, with anonymous credentials, over this secured channel. By default, it opens the “Cloud” dashboard.
- New “Infrastructure Containers” dashboard (backported from the master branch) for HyperCX BEATS, showing how many resources the cloud infrastructure containers are consuming.
Backups
- A fully revamped backup solution has been added, now based on datastore back-ends instead of the private Vault marketplace offered by the previous solution. A new image type has been introduced to represent backups. This allows you to implement tier-based backup policies, leverage the access control and quota systems, and configure full and incremental backups with different filesystem freeze modes (NONE, SUSPEND, AGENT).
- HyperCX VAULT is now twice as fast and produces more consistent backups than the old solution. Backups are written directly to the Vault server, with compression enabled.
- Backups moved from a version-based scheme to incremental backups, improving the space utilization dedicated to backup images.
- Another highlight is the introduction of Backup Jobs, which define backup operations spanning multiple VMs, simplifying the management of your cloud infrastructure. A Backup Job establishes a unified backup policy for multiple VMs, covering schedules, backup retention, and filesystem freeze mode. It also keeps backup operations under control so they do not disrupt ongoing workloads, and lets you monitor their progress, which is essential to estimate backup times accurately.
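As a sketch of how such a per-VM policy might look, the fragment below uses the BACKUP_CONFIG attributes from the upstream OpenNebula template syntax (attribute names taken from OpenNebula's documentation; verify them against your HyperCX release before relying on them):

```
# Illustrative OpenNebula-style VM template fragment, not HyperCX's exact output
BACKUP_CONFIG = [
  MODE      = "INCREMENT",  # incremental backups instead of full ones
  FS_FREEZE = "AGENT",      # filesystem freeze mode: NONE, SUSPEND or AGENT
  KEEP_LAST = "7"           # retention: keep only the last 7 backups
]
```

With a policy like this in place, a backup is triggered against a backup datastore (for example with OpenNebula's `onevm backup` command), and subsequent runs transfer only the changed blocks.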
Storage
- qcow2 disks with snapshots can now be resized when using the local storage solution (HW RAID 5) on HCX1 deployments.
- Added support for a new SDS for BENTO, based on Ceph and integrated with our HyperCX HCI architecture.
- The new SDS is a block storage solution with:
  - Ceph Replica 3 (for >=3 BENTO nodes): all client production data is replicated 3 times. The cluster can tolerate the loss of up to 2 copies of an object without losing data integrity, but to keep the cluster fully operational and healthy it is generally safe to assume 1 host failure. Data loss is only possible if 3 storage nodes are affected.
  - Ceph Erasure Coding 3+2 (for >=5 BENTO nodes): offers better storage efficiency than 3-way replication, with an effective usable capacity of about 60% of raw storage, while still maintaining strong fault tolerance. The storage pool can tolerate up to 2 host failures without losing data integrity or availability.
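The capacity trade-off between the two layouts comes down to simple arithmetic; the sketch below (illustrative Python, not part of HyperCX) compares the usable fraction of raw storage for each scheme:

```python
# Usable-capacity math behind the two Ceph layouts described above
# (illustrative; real clusters also reserve headroom for rebalancing).

def usable_fraction(data_chunks: int, total_chunks: int) -> float:
    """Fraction of raw capacity that holds user data."""
    return data_chunks / total_chunks

# 3-way replication: 1 data copy out of 3 stored copies.
replica3 = usable_fraction(1, 3)

# Erasure coding 3+2: 3 data chunks + 2 parity chunks per object.
ec_3_2 = usable_fraction(3, 5)

print(f"Replica 3 usable: {replica3:.0%}")  # Replica 3 usable: 33%
print(f"EC 3+2 usable:    {ec_3_2:.0%}")    # EC 3+2 usable:    60%
```

Both layouts survive 2 lost chunks per object; erasure coding simply pays for that resilience with parity instead of full copies, which is where the roughly 60% usable capacity comes from.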
KVM
Now it is possible to fine-tune the selection of CPU flags, specify the io_uring driver for disks, define a custom video device for VMs, and automatically set default timers to improve Windows performance. These options have been implemented in the driver and, where it makes sense, are already exposed through Sunstone.
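For reference, these tuning knobs correspond to standard libvirt domain XML elements such as the following hand-written fragments (illustrative only, not the exact XML HyperCX generates):

```
<cpu mode='host-passthrough'>
  <feature policy='require' name='aes'/>            <!-- fine-tuned CPU flag -->
</cpu>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' io='io_uring'/>  <!-- io_uring disk driver -->
</disk>
<video>
  <model type='virtio'/>                            <!-- custom video device -->
</video>
<clock offset='localtime'>
  <timer name='hypervclock' present='yes'/>         <!-- Windows timer tuning -->
  <timer name='rtc' tickpolicy='catchup'/>
</clock>
```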
Network
Virtual networks can now be updated in place, with changes applied automatically to all running virtual machines that have network interfaces attached to the updated network. No more reattaching NICs or relaunching VMs to change a network parameter.
PCI Passthrough
There is also a series of improvements in the PCI Passthrough functionality, oriented towards getting the optimal performance out of your hardware. These include improved integration with libvirt/QEMU (only activating the relevant virtual function on attach), predictable PCI addresses, configuration of virtual functions through ip link, and support for attaching and detaching NICs with PCI attributes, among others.
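Virtual-function configuration via ip link uses the standard iproute2 syntax, along the lines of the commands below (the interface name and VF index are placeholders for illustration):

```
# Configure virtual function 0 of a hypothetical SR-IOV NIC before passthrough
ip link set dev enp65s0f0 vf 0 mac 02:00:00:aa:bb:cc
ip link set dev enp65s0f0 vf 0 vlan 100
ip link set dev enp65s0f0 vf 0 spoofchk on
```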