A new version of the AccentOS software platform, AccentOS 3.0, has been released.

New in AccentOS 3.0:

  • Support for Linux kernel 6.x with improved virtualization functionality; a version adapted for licensed operating systems.
  • Support for fully qualified domain names for the instance hostname.
  • Improved MariaDB database for storing key configuration data.
  • The OpenStack release cycle has been extended to 12 months, reducing the frequency of updates.
  • Automatic system deployment using Podman containers.
  • Support for searching flavor attributes (CPU, RAM, etc.) and filtering the attribute list by deployment ID and key.
  • Create a cluster instance from a list of templates.
  • Update and display quota information when the cluster changes.
  • An availability zone can now be used in several projects. DNS support includes classless IN-ADDR.ARPA delegation (RFC 2317), which allows PTR records to be assigned to IP addresses in blocks smaller than a /24 without creating a separate DNS zone for each address.
  • Magnum support in the admin UI platform.
  • Global navigation in the upper-left corner of the page, a progress bar, and the ability to cancel a file upload in a modal form.
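The RFC 2317 mechanism mentioned above can be sketched in a few lines (an illustration of the standard, not AccentOS code): for a delegated block such as 192.0.2.16/28, the parent in-addr.arpa zone holds CNAME records that point each address at a subzone named after the block.

```python
import ipaddress

def rfc2317_zone(cidr: str) -> str:
    """RFC 2317 subzone name for an IPv4 block smaller than a /24."""
    net = ipaddress.ip_network(cidr)
    o1, o2, o3, o4 = str(net.network_address).split(".")
    # RFC 2317 labels the delegated subzone "<first-address>/<prefix-length>".
    return f"{o4}/{net.prefixlen}.{o3}.{o2}.{o1}.in-addr.arpa"

def rfc2317_cnames(cidr: str) -> list[str]:
    """CNAME records the parent /24 zone needs so PTR lookups follow the delegation."""
    net = ipaddress.ip_network(cidr)
    o1, o2, o3, _ = str(net.network_address).split(".")
    parent = f"{o3}.{o2}.{o1}.in-addr.arpa"
    zone = rfc2317_zone(cidr)
    return [
        f"{str(ip).rsplit('.', 1)[1]}.{parent}. CNAME {str(ip).rsplit('.', 1)[1]}.{zone}."
        for ip in net  # every address in the delegated block
    ]
```

With this scheme, PTR queries for the block resolve through the CNAMEs into the tenant's own subzone, so each small block gets exactly one delegated zone.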


In the field of equipment management:

  • CPU power consumption can now be limited by the system.
  • Full vGPU management functionality, similar to that in Nova.
  • Support for live migration of virtual machines with GPUs.
  • API for GPU attribute lifecycle management.
  • PCI devices are monitored and maintained locally on the server.
  • Unified management of a variety of devices, such as FPGAs.
  • Hosts can now be named by fully qualified domain names (FQDNs).
  • Disk devices are named by UUID to avoid confusion.
  • Added support for extending attached volumes in Cinder.
  • New storage drivers have been added to the Cinder block storage module: HPE XP iSCSI and FC, Fungible NVMe-TCP, and NetApp NVMe-TCP.
  • Added support for Trisync replication for the Pure driver, volume group snapshots for the IBM SVF driver, Unisphere 10 for the Dell EMC PowerMax driver, and host-based migration and retype for the Hitachi VSP driver.
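The motivation for UUID-based disk naming can be illustrated with a short sketch (not AccentOS code; the UUID below is a made-up placeholder): kernel names such as /dev/sdb depend on device probe order and can change between reboots, while a volume's UUID stays fixed.

```python
VOLUME_UUID = "0f7a2c1e-9f4d-4b6a-8a77-2f3d5e6c1b9a"  # hypothetical volume UUID

def by_uuid_path(uuid: str) -> str:
    """Stable udev-style path, independent of device probe order."""
    return f"/dev/disk/by-uuid/{uuid}"

# Simulated symlink tables from two boots: the same volume received a
# different kernel name each time, but its UUID-based path is identical.
first_boot  = {by_uuid_path(VOLUME_UUID): "/dev/sdb"}
second_boot = {by_uuid_path(VOLUME_UUID): "/dev/sdc"}

stable = by_uuid_path(VOLUME_UUID)  # safe to reference in configuration
```

Configuration that references the by-uuid path therefore never points at the wrong disk after a reboot.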


In the field of security:

  • Implemented a secure role-based access control (sRBAC) model for the Neutron network module. The Glance module now applies sRBAC policies by default.
  • Authentication of external servers using OAuth 2.0 Mutual-TLS has been implemented.
  • Keystone SSL verification can now be configured in the Skyline UI.
  • The Skyline UI writes logs without a hard-coded path.
  • Added the validate-config CLI command, which validates service configuration files using oslo-config-validator.
  • Trove service deployment now supports internal TLS.
  • The nginx.conf.j2 template supports both HTTP and HTTPS (HTTP by default).
  • When API microversion 2.95 is selected, evacuated virtual machines remain stopped on the target host until they are started manually.
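The idea behind validate-config can be illustrated with a toy validator in the spirit of oslo-config-validator (the schema below is a made-up example, not a real service's option set): it flags any section or option that no service has declared.

```python
import configparser

# Assumed option registry for illustration; real services declare their
# options in code, and oslo-config-validator reads them from there.
SCHEMA = {
    "DEFAULT": {"debug", "log_dir"},
    "database": {"connection", "max_pool_size"},
}

def validate(text: str) -> list[str]:
    """Return a list of problems found in an INI-style service config."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    errors = []
    for section in ["DEFAULT"] + cfg.sections():
        known = SCHEMA.get(section)
        if known is None:
            errors.append(f"unknown section [{section}]")
            continue
        for opt in cfg[section]:
            # configparser mirrors DEFAULT options into every section.
            if section != "DEFAULT" and opt in cfg["DEFAULT"]:
                continue
            if opt not in known:
                errors.append(f"unknown option '{opt}' in [{section}]")
    return errors
```

A valid file yields an empty list; a typo such as `conection` is reported with its section, which is the kind of drift this check is meant to catch before deployment.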


For operators:

  • Added support for the resource allocation API in the Blazar command-line client, making it possible to see which hosts are allocated to each reservation.
  • Added random selection option for physical host reservation.
  • Implemented a load balancer with various scenarios.
  • Implemented placement of tunneled networks and shared resources.
  • Enabled host multi-segment support for the ML2/OVS driver.
  • Implemented Neutron dynamic routing using ML2/OVN.
  • An OVN agent has been created to implement functions not provided by ovn-controller; the metadata service will be migrated to it first.
  • Improved Tacker (MANO) module UI support for NFV services.
  • Automatic CNF scaling via performance management threshold interface.
  • The network configuration can be updated via the current VNF package API.
  • AutoHeal and AutoScale can be triggered by external monitoring tools such as Prometheus, without involving the NFVO.
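How an operator might consume the host-allocation data from the Blazar client can be sketched as follows. The payload shape below is an assumption for illustration, not the exact API schema:

```python
from collections import defaultdict

# Hypothetical allocation payload: each entry lists the reservations
# that occupy one physical host.
allocations = [
    {"resource_id": "host-1", "reservations": [{"id": "res-a"}, {"id": "res-b"}]},
    {"resource_id": "host-2", "reservations": [{"id": "res-a"}]},
]

def hosts_by_reservation(allocs):
    """Invert host -> reservations into reservation -> hosts."""
    mapping = defaultdict(list)
    for alloc in allocs:
        for res in alloc["reservations"]:
            mapping[res["id"]].append(alloc["resource_id"])
    return dict(mapping)
```

Inverting the mapping this way answers the operator's question directly: which hosts back a given reservation.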



  • Implemented improved integration of networking functions with Kubernetes.
  • The Magnum module has been updated to support Kubernetes v1.24 running on Fedora CoreOS 36 and 37.
  • All containerized system services now run in Podman containers.
  • Support for additional options when creating a Zun container.
  • APIs for transferring shares between projects are now available in the shared file system service.
  • Users can specify metadata when creating share networks. The behavior is similar to Manila share metadata: users can update and delete it after creation.
  • Advanced capabilities for rapid deployment of AI platforms, including Sahara (Hadoop and Spark as a service for ML workloads).
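The share-network metadata semantics described above can be sketched minimally (assumed behavior modeled on Manila-style share metadata, not AccentOS code): metadata is set at creation, then updated or deleted per key.

```python
class ShareNetwork:
    """Toy model of a share network carrying user-managed metadata."""

    def __init__(self, name, metadata=None):
        self.name = name
        self.metadata = dict(metadata or {})  # set at creation time

    def update_metadata(self, updates):
        """Add or overwrite metadata keys, like a metadata update call."""
        self.metadata.update(updates)

    def delete_metadata(self, key):
        """Remove one metadata key; deleting a missing key is a no-op."""
        self.metadata.pop(key, None)
```

The point of the sketch is the lifecycle: create with metadata, update keys in place, and delete keys individually, mirroring how share metadata already behaves.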