Nvidia and VMware have announced a new software product that will let customers virtualize GPUs, either as part of VMware Cloud on AWS or on-premises. The companies say it will be their primary hybrid cloud offering, letting firms use GPUs to accelerate AI, deep learning, and machine learning workloads. Firms already use GPUs in their data centers to power AI and deep learning analytics, and according to Nvidia VP John Fanelli, the nature of the workload determines whether processing happens on-premises in the data center or in the cloud.
Nvidia’s new Virtual Compute Server (vComputeServer) software is optimized for VMware’s vSphere and will be available through major OEMs, including Supermicro, Cisco, HPE, Dell, and Lenovo. By bringing AI into the data center, the software is meant to help IT administrators run and configure their infrastructure for a virtualized environment. vComputeServer can use Nvidia’s RAPIDS data processing and machine learning libraries to GPU-accelerate data science workflows. It supports both GPU sharing, where multiple virtual machines share one physical GPU, and GPU aggregation, where multiple GPUs are assigned to a single virtual machine.
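To make the two modes concrete, here is a minimal toy model of GPU sharing versus GPU aggregation. The class names, slice labels, and memory figures below are invented for illustration; they are not Nvidia's actual vComputeServer API.

```python
# Toy model of the two vComputeServer modes described above.
# All names and numbers are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class PhysicalGPU:
    name: str
    memory_gb: int

@dataclass
class VM:
    name: str
    gpus: list = field(default_factory=list)

def share_gpu(gpu: PhysicalGPU, slices: int) -> list:
    """GPU sharing: carve one physical GPU into fractional vGPU slices,
    each of which can be handed to a different virtual machine."""
    per_slice = gpu.memory_gb // slices
    return [f"{gpu.name}-slice{i} ({per_slice} GB)" for i in range(slices)]

def aggregate(vm: VM, gpus: list) -> VM:
    """GPU aggregation: attach several whole physical GPUs to one VM,
    e.g. for a large training workload."""
    vm.gpus.extend(gpus)
    return vm

# Sharing: a hypothetical 16 GB GPU split four ways yields four 4 GB slices.
print(share_gpu(PhysicalGPU("gpu0", 16), 4))

# Aggregation: one "trainer" VM gets two whole GPUs.
trainer = aggregate(VM("trainer"), [PhysicalGPU("gpu0", 16), PhysicalGPU("gpu1", 16)])
print(len(trainer.gpus))  # 2
```

The point of the contrast: sharing raises utilization for many small inference or notebook workloads, while aggregation scales a single heavy training job across devices.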
The technology is licensed on a per-GPU basis, and virtualization imposes roughly a 5% performance overhead, depending on the workload. vComputeServer will also be available on VMware Cloud on AWS, running on VMware’s software-defined data center (SDDC) stack in the Amazon Web Services cloud. On a related note, VMware announced last month that it had signed an agreement to acquire Bitfusion, a Texas-based company that makes elastic virtualization software for hardware accelerators. The acquisition is expected to strengthen VMware’s position in accelerator virtualization and cloud computing.
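As a rough illustration of what a ~5% virtualization overhead means in practice, the sketch below applies it to a hypothetical throughput figure. Only the 5% overhead comes from the announcement; the baseline number is made up for the example.

```python
# Illustration of a ~5% virtualization overhead on GPU throughput.
# The 1000 images/sec baseline is hypothetical; only the 5% figure
# comes from the article.

def virtualized_throughput(bare_metal_throughput: float, overhead: float = 0.05) -> float:
    """Return effective throughput after applying a fractional overhead."""
    return bare_metal_throughput * (1.0 - overhead)

# A GPU sustaining 1000 images/sec on bare metal would sustain roughly
# 950 images/sec under virtualization at a 5% overhead.
print(virtualized_throughput(1000.0))  # 950.0
```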