From mboxrd@z Thu Jan  1 00:00:00 1970
From: Kirti Wankhede
Subject: Re: [RFC PATCH v1 1/1] vGPU core driver : to provide common interface for vGPU.
Date: Wed, 3 Feb 2016 18:51:03 +0530
Message-ID: <56B1FEBF.4050100@nvidia.com>
References: <56AFD231.3010404@nvidia.com> <56B00AD7.6070103@nvidia.com> <56B07605.7030905@nvidia.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="utf-8"; format=flowed
Content-Transfer-Encoding: 7bit
To: "Tian, Kevin", Alex Williamson, Neo Jia, Gerd Hoffmann, Paolo Bonzini
Cc: "Ruan, Shuai", "Song, Jike", "Lv, Zhiyuan", "kvm@vger.kernel.org", qemu-devel
In-Reply-To:

On 2/3/2016 11:26 AM, Tian, Kevin wrote:
[...]
>>>>>  * @vgpu_create:  Called to allocate basic resources in the graphics
>>>>>  *                driver for a particular vgpu.
>>>>>  * @dev:          physical pci device structure on which the vgpu
>>>>>  *                should be created
>>>>>  * @uuid:         uuid of the VM for which it is intended
>>>>>  * @instance:     vgpu instance in that VM
>>>>>  * @vgpu_id:      the type of vgpu to be created
>>>>>  * Returns integer: success (0) or error (< 0)
>>>
>>> Specifically for Intel GVT-g we didn't hard partition resources among vGPUs.
>>> Instead we allow the user to accurately control how many physical resources
>>> are allocated to a vGPU. So this interface should be extensible to allow
>>> vendor specific resource control.
>>>
>>
>> This interface forwards the create request to the vendor/GPU driver,
>> informing it which physical GPU the request is intended for and the
>> type of vGPU. It is then the vendor/GPU driver's responsibility to
>> allocate and manage resources in its own driver.
>
> However the current parameter definition disallows resource configuration
> information passed from the user.
> As I said, Intel GVT-g doesn't do static
> allocation based on type. We provide flexibility to the user for fine-grained
> resource management.
>

     int (*vgpu_create)(struct pci_dev *dev, uuid_le uuid,
-                       uint32_t instance, uint32_t vgpu_id);
+                       uint32_t instance, char *vgpu_params);

If we change the integer vgpu_id parameter to char *vgpu_params, the GPU
driver can accept multiple parameters. Suppose there is a GPU at
0000:85:00.0; then to create a vgpu:

# echo "<uuid>:<instance>:<vgpu_params>" > /sys/bus/pci/devices/0000\:85\:00.0/vgpu_create

The common vgpu module will not parse the vgpu_params string; it will be
forwarded to the GPU driver, and it is then the GPU driver's responsibility
to parse the string and act accordingly. This gives the flexibility to have
multiple parameters for the GPU driver.

>>>>>  *
>>>>>  * A physical GPU that supports vGPU should be registered with the vgpu
>>>>>  * module with a gpu_device_ops structure.
>>>>>  */
>>>>>
>>>
>>> Also it'd be good design to allow extensible usages, such as statistics, and
>>> other vendor specific control knobs (e.g. foreground/background VM switch
>>> in Intel GVT-g, etc.)
>>>
>>
>> Can you elaborate on what other control knobs would be needed?
>>
>
> Some examples:
>
> - foreground/background VM switch
> - resource query
> - various statistics info
> - virtual monitor configuration
> - ...
>
> Since this vgpu core driver will become the central point for all vgpu
> management, we need to provide an easy way for vendor specific extension,
> e.g. exposing the above callbacks as sysfs nodes, and then a vendor can create
> its own extensions under a subdirectory (/intel, /nvidia, ...).
>

Ok. Makes sense. Are these parameters per physical device or per vgpu
device?

Adding attribute groups to the gpu_device_ops structure would provide a way
to add vendor specific extensions. I think we need two types of attributes
here, per physical device and per vgpu device. Right?

    const struct attribute_group **dev_groups;
    const struct attribute_group **vgpu_groups;

Thanks,
Kirti.
> Thanks
> Kevin
>