From: Adit Ranadive <aditr-pghWNbHTmq7QT0dZR+AlfA@public.gmane.org>
To: Doug Ledford <dledford-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>,
	Leon Romanovsky <leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>,
	Marcel Apfelbaum <marcel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
Cc: qemu-devel-qX2TKyscuCcdnm+yROfE0A@public.gmane.org,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	yuval.shaia-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org
Subject: Re: [Qemu-devel] [PATCH RFC] hw/pvrdma: Proposal of a new pvrdma device
Date: Thu, 30 Mar 2017 16:38:45 -0700	[thread overview]
Message-ID: <ea171d6c-871b-2cf0-148c-ca7cd85c0ecd@vmware.com> (raw)
In-Reply-To: <5e952524-7c2d-b4da-4bd7-6437830a40d8-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>

On Thu Mar 30 2017 13:28:21 GMT-0700 (PDT), Doug Ledford wrote:
> On 3/30/17 9:13 AM, Leon Romanovsky wrote:
> > On Thu, Mar 30, 2017 at 02:12:21PM +0300, Marcel Apfelbaum wrote:
> > > From: Yuval Shaia <yuval.shaia-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>
> > >
> > >  Hi,
> > >
> > >  General description
> > >  ===================
> > >  This is a very early RFC of a new emulated RoCE device
> > >  that enables guests to use the RDMA stack without
> > >  real hardware in the host.
> > >
> > >  The current implementation supports only VM to VM communication
> > >  on the same host.
> > >  Down the road we plan to support inter-machine
> > >  communication by utilizing physical RoCE devices
> > >  or Soft RoCE.
> > >
> > >  The goals are:
> > >  - Reach fast, secure, loss-less inter-VM data exchange.
> > >  - Support remote VMs or bare-metal machines.
> > >  - Allow VM migration.
> > >  - Do not require pinning all of the VM's memory.
> > >
> > >
> > >  Objective
> > >  =========
> > >  Have a QEMU implementation of the PVRDMA device. We aim to do so
> > >  without any change to the PVRDMA guest driver, which is already
> > >  merged into the upstream kernel.
> > >
> > >
> > >  RFC status
> > >  ===========
> > >  The project is in early development stages and supports
> > >  only basic send/receive operations.
> > >
> > >  We present it now so we can get feedback on the design
> > >  and feature demands, and to receive comments from the
> > >  community pointing us in the "right" direction.
> >
> > Judging by the feedback you got from the RDMA community
> > on the kernel proposal [1], this community failed to understand:
> > 1. Why do you need a new module?
> 
> In this case, this is a QEMU module that lets QEMU provide a virtual RDMA device to guests, compatible with the device provided by VMware's ESX product.  Right now, the vmware_pvrdma driver works only when the guest is running on a VMware ESX server product; this would change that.  Marcel mentioned that they are currently making it compatible because that's the easiest/quickest thing to do, but in the future they might extend beyond what VMware's virtual RDMA driver provides/uses, and might then need to either modify it to work with their extensions or fork and create their own virtual client driver.
> 
> > 2. Why existing solutions are not enough and can't be extended?
> 
> This patch is against the QEMU source code, not the kernel.  There is no other solution in the QEMU source code, so there is no existing solution to extend.
> 
> > 3. Why RXE (SoftRoCE) can't be extended to perform this inter-VM
> >    communication via virtual NIC?
> 
> Eventually they want this to work on real hardware, and to be more or less transparent to the guest.  They will need to make it independent of the kernel hardware/driver in use.  That means their own virt driver; the virt driver will eventually hook into whatever RDMA hardware is present on the system, or, failing that, fall back to Soft RoCE or soft iWARP if that ever makes it into the kernel.
>
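The Soft RoCE fallback mentioned above can be provisioned on a host with no RDMA hardware roughly as follows. This is a sketch: the netdev name is a placeholder, and the `rdma link` syntax assumes a newer iproute2 (older setups used the `rxe_cfg` script instead):

```shell
# Create a Soft RoCE (rxe) device on top of an ordinary Ethernet NIC.
modprobe rdma_rxe                         # load the software RoCE driver
rdma link add rxe0 type rxe netdev eth0   # "eth0" is a placeholder netdev

# A verbs device should now be visible to RDMA applications.
ibv_devices
```

Applications written against libibverbs then run unchanged, whether the verbs device underneath is real hardware or rxe.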

Hmm, this looks quite interesting. Though I'm not surprised; the PVRDMA
device spec is relatively straightforward.
I would definitely have mentioned this (had I known about it) during my
OFA workshop talk a couple of days ago :).

Doug's right: basically, this looks like a QEMU version of our PVRDMA
backend.
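For the curious, attaching the emulated device to a guest might eventually look something like the following. This is purely illustrative: the `-device pvrdma` name and the surrounding flags are assumptions based on this RFC, not a final interface:

```shell
# Hypothetical invocation: start a guest with the emulated PVRDMA device.
# Exact property names depend on the QEMU version that ships the device;
# consult its documentation.
qemu-system-x86_64 \
    -enable-kvm -m 1G \
    -device pvrdma \
    guest-disk.img
```

Inside the guest, the stock vmw_pvrdma driver would then bind to the device with no changes, which is the stated objective of the RFC.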

Thanks,
Adit

  parent reply	other threads:[~2017-03-30 23:38 UTC|newest]

Thread overview: 30+ messages
2017-03-30 11:12 [Qemu-devel] [PATCH RFC] hw/pvrdma: Proposal of a new pvrdma device Marcel Apfelbaum
2017-03-30 14:13 ` Leon Romanovsky
2017-03-30 20:28   ` Doug Ledford
2017-03-30 23:38     ` Adit Ranadive [this message]
2017-03-31 15:50       ` Marcel Apfelbaum
2017-03-31 15:45     ` Marcel Apfelbaum
2017-04-03  6:23       ` Leon Romanovsky
2017-04-04 13:38         ` Marcel Apfelbaum
2017-04-04 16:01           ` Jason Gunthorpe
2017-04-06 19:42             ` Yuval Shaia
2017-04-06 20:38               ` Jason Gunthorpe
2017-04-04 17:33           ` Leon Romanovsky
2017-04-06 19:45             ` Yuval Shaia
2017-04-06 20:54               ` Jason Gunthorpe
2017-04-03  6:27     ` Leon Romanovsky