* [PATCH v11 net-next 0/1] introduce Hyper-V VM Sockets(hv_sock)
@ 2016-05-15 16:52 Dexuan Cui
  2016-05-15 17:16 ` David Miller
  0 siblings, 1 reply; 7+ messages in thread
From: Dexuan Cui @ 2016-05-15 16:52 UTC (permalink / raw)
  To: gregkh, davem, netdev, linux-kernel, devel, olaf, apw, jasowang,
	cavery, kys, haiyangz
  Cc: joe, vkuznets

Hyper-V Sockets (hv_sock) supplies a byte-stream based communication
mechanism between the host and the guest. It's somewhat like TCP over
VMBus, but the transport layer (VMBus) is much simpler than IP.

With Hyper-V Sockets, applications on the host and in the guest can talk
to each other directly using the traditional BSD-style socket APIs.

Hyper-V Sockets is only available on newer Windows hosts, such as Windows Server
2016. More info is in this article "Make your own integration services":
https://msdn.microsoft.com/en-us/virtualization/hyperv_on_windows/develop/make_mgmt_service

The patch implements the necessary support on the guest side by
introducing a new socket address family, AF_HYPERV.

You can also get the patch by:
https://github.com/dcui/linux/commits/decui/hv_sock/net-next/20160512_v10

Note: the VMBus driver side's supporting patches have been in the mainline
tree.

I know the kernel already has a VM Sockets driver (AF_VSOCK) based
on VMware VMCI (net/vmw_vsock/, drivers/misc/vmw_vmci), and a virtio
version of AF_VSOCK has been proposed for KVM:
http://marc.info/?l=linux-netdev&m=145952064004765&w=2

However, though Hyper-V Sockets may seem conceptually similar to
AF_VSOCK, there are differences in the transport layer, and IMO these
make direct code reuse impractical:

1. In AF_VSOCK, the endpoint is <u32 ContextID, u32 Port>, but in
AF_HYPERV, the endpoint is <GUID VM_ID, GUID ServiceID>, where a GUID is
128 bits (see the sketch below).

2. AF_VSOCK supports SOCK_DGRAM, while AF_HYPERV doesn't.

3. AF_VSOCK supports some special socket options, like SO_VM_SOCKETS_BUFFER_SIZE,
SO_VM_SOCKETS_BUFFER_MIN/MAX_SIZE and SO_VM_SOCKETS_CONNECT_TIMEOUT.
These are meaningless to AF_HYPERV.

4. Some of AF_VSOCK's VMCI transport ops are meaningless to AF_HYPERV/VMBus,
like:
.notify_recv_init
.notify_recv_pre_block
.notify_recv_pre_dequeue
.notify_recv_post_dequeue
.notify_send_init
.notify_send_pre_block
.notify_send_pre_enqueue
.notify_send_post_enqueue
etc.

So I think we'd better introduce a new address family: AF_HYPERV.
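To make difference 1 concrete, here is a minimal sketch of the two endpoint
shapes side by side. The AF_HYPERV struct below is illustrative only: the
field names and the layout are assumptions for this mail, not the exact
definition of struct sockaddr_hv in the patch.

	#include <sys/socket.h>
	#include <linux/vm_sockets.h>	/* struct sockaddr_vm (AF_VSOCK), for comparison */

	/* AF_VSOCK endpoint: a 32-bit context ID plus a 32-bit port. */
	static const struct sockaddr_vm vsock_ep = {
		.svm_family = AF_VSOCK,
		.svm_cid    = 3,	/* which VM */
		.svm_port   = 1234,	/* which service */
	};

	/* Hypothetical AF_HYPERV endpoint: two 128-bit GUIDs instead of two u32s.
	 * The real struct sockaddr_hv in the patch may use different names. */
	struct sockaddr_hv_sketch {
		unsigned short shv_family;		/* AF_HYPERV */
		unsigned short reserved;
		unsigned char  shv_vm_id[16];		/* GUID: which VM */
		unsigned char  shv_service_id[16];	/* GUID: which service (no port number) */
	};

A connect()/bind() on such a socket would then pass a VM GUID and a service
GUID rather than a CID/port pair.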

Please review the patch.

Looking forward to your comments, especially comments from David. :-)

Changes since v1:
- updated "[PATCH 6/7] hvsock: introduce Hyper-V VM Sockets feature"
- added __init and __exit for the module init/exit functions
- net/hv_sock/Kconfig: "default m" -> "default m if HYPERV"
- MODULE_LICENSE: "Dual MIT/GPL" -> "Dual BSD/GPL"

Changes since v2:
- fixed various coding issues pointed out by David Miller
- fixed indentation issues
- removed pr_debug in net/hv_sock/af_hvsock.c
- used reverse-Christmas-tree style for local variables.
- EXPORT_SYMBOL -> EXPORT_SYMBOL_GPL

Changes since v3:
- fixed a few coding issues pointed out by Vitaly Kuznetsov and Dan Carpenter
- fixed the ret value in vmbus_recvpacket_hvsock on error
- fixed the style of multi-line comment: vmbus_get_hvsock_rw_status()

Changes since v4 (https://lkml.org/lkml/2015/7/28/404):
- addressed all the comments about V4.
- treat the hvsock offers/channels as special VMBus devices
- add a mechanism to pass hvsock events to the hvsock driver
- fixed some corner cases with proper locking when a connection is closed
- rebased to Greg's latest tree

Changes since v5 (https://lkml.org/lkml/2015/12/24/103):
- addressed the coding style issues (Vitaly Kuznetsov & David Miller, thanks!)
- used a better approach for the per-channel rescind callback (thanks, Vitaly!)
- avoided introducing the new VMBus driver APIs vmbus_sendpacket_hvsock()
and vmbus_recvpacket_hvsock(), and used vmbus_sendpacket()/vmbus_recvpacket()
at the higher level (i.e., in the hv_sock driver). Thanks, Vitaly!

Changes since v6 (http://lkml.iu.edu/hypermail/linux/kernel/1601.3/01813.html)
- only a few minor changes of coding style and comments

Changes since v7
- a few minor changes of coding style: thanks, Joe Perches!
- added some lines of comments about GUID/UUID before the struct sockaddr_hv.

Changes since v8
- removed the unnecessary __packed for some definitions: thanks, David!
- hvsock_open_connection: use offer.u.pipe.user_def[0] to know the connection
direction, and reorganized the function
- reorganized the code according to suggestions from Cathy Avery: split big
functions into small ones, set .setsockopt and .getsockopt to
sock_no_setsockopt/sock_no_getsockopt
- inline'd some small list helper functions

Changes since v9
- minimized struct hvsock_sock by making the send/recv buffers pointers;
   the buffers are now allocated by kmalloc() in __hvsock_create().
- minimized the sizes of the send/recv buffers and the vmbus ringbuffers.

Changes since v10

1) add module params: send_ring_page, recv_ring_page. They can be used to
enlarge the ringbuffer size to get better performance, e.g.,
# modprobe hv_sock recv_ring_page=16 send_ring_page=16
By default, recv_ring_page is 3 and send_ring_page is 2 (a sketch of these
knobs as module parameters appears after this list).

2) add module param max_socket_number (the default is 1024).
A user can enlarge the number to create more than 1024 hv_sock sockets.
By default, 1024 sockets take about 1024 * (3+2+1+1) * 4KB = 28 MB.
(Here 1+1 means 1 page each for the per-connection send and recv buffers.)

3) implement the TODO in hvsock_shutdown().

4) fix a bug in hvsock_close_connection():
   I removed "sk->sk_socket->state = SS_UNCONNECTED;" -- actually this line
is not really useful. For a connection triggered by a host app's connect(),
sk->sk_socket remains NULL before the connection is accepted by the server
app (in the Linux VM): see hvsock_accept() -> hvsock_accept_wait() ->
sock_graft(connected, newsock). If the host app exits before the server
app's accept() returns, the host can send a rescind message to close the
connection, and later, in the Linux VM's message handler
(i.e., vmbus_onoffer_rescind()), Linux will get a NULL-dereference crash.

5) fix a bug in hvsock_open_connection():
  I moved the vmbus_set_chn_rescind_callback() call to a later place, because
when vmbus_open() fails, hvsock_close_connection() can do nothing and we
count on vmbus_onoffer_rescind() -> vmbus_device_unregister() to clean up
the device.

6) some stylistic modifications.
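For reference, here is a hedged sketch of what the knobs from items 1) and 2)
above could look like as module parameters. The variable names follow the
changelog, but the exact types, permissions, and descriptions in the patch may
differ.

	#include <linux/module.h>

	/* Defaults quoted above: 2 pages out, 3 pages in, at most 1024 sockets. */
	static unsigned int send_ring_page = 2;
	static unsigned int recv_ring_page = 3;
	static unsigned int max_socket_number = 1024;

	module_param(send_ring_page, uint, 0444);
	MODULE_PARM_DESC(send_ring_page, "Pages per connection for the outgoing (guest-to-host) ringbuffer");

	module_param(recv_ring_page, uint, 0444);
	MODULE_PARM_DESC(recv_ring_page, "Pages per connection for the incoming (host-to-guest) ringbuffer");

	module_param(max_socket_number, uint, 0444);
	MODULE_PARM_DESC(max_socket_number, "Maximum number of hv_sock sockets");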

Dexuan Cui (1):
  hv_sock: introduce Hyper-V Sockets

 MAINTAINERS                 |    2 +
 include/linux/hyperv.h      |   14 +
 include/linux/socket.h      |    4 +-
 include/net/af_hvsock.h     |   78 +++
 include/uapi/linux/hyperv.h |   25 +
 net/Kconfig                 |    1 +
 net/Makefile                |    1 +
 net/hv_sock/Kconfig         |   10 +
 net/hv_sock/Makefile        |    3 +
 net/hv_sock/af_hvsock.c     | 1520 +++++++++++++++++++++++++++++++++++++++++++
 10 files changed, 1657 insertions(+), 1 deletion(-)
 create mode 100644 include/net/af_hvsock.h
 create mode 100644 net/hv_sock/Kconfig
 create mode 100644 net/hv_sock/Makefile
 create mode 100644 net/hv_sock/af_hvsock.c

-- 
2.7.4


* Re: [PATCH v11 net-next 0/1] introduce Hyper-V VM Sockets(hv_sock)
  2016-05-15 16:52 [PATCH v11 net-next 0/1] introduce Hyper-V VM Sockets(hv_sock) Dexuan Cui
@ 2016-05-15 17:16 ` David Miller
  2016-05-17  2:45   ` Dexuan Cui
  0 siblings, 1 reply; 7+ messages in thread
From: David Miller @ 2016-05-15 17:16 UTC (permalink / raw)
  To: decui
  Cc: gregkh, netdev, linux-kernel, devel, olaf, apw, jasowang, cavery,
	kys, haiyangz, joe, vkuznets

From: Dexuan Cui <decui@microsoft.com>
Date: Sun, 15 May 2016 09:52:42 -0700

> Changes since v10
> 
> 1) add module params: send_ring_page, recv_ring_page. They can be used to
> enlarge the ringbuffer size to get better performance, e.g.,
> # modprobe hv_sock  recv_ring_page=16 send_ring_page=16
> By default, recv_ring_page is 3 and send_ring_page is 2.
> 
> 2) add module param max_socket_number (the default is 1024).
> A user can enlarge the number to create more than 1024 hv_sock sockets.
> By default, 1024 sockets take about 1024 * (3+2+1+1) * 4KB = 28M bytes.
> (Here 1+1 means 1 page for send/recv buffers per connection, respectively.)

This is papering around my objections, and creates module parameters, which
I am fundamentally against.

You're making the facility unusable by default, just to work around my
memory consumption concerns.

What will end up happening is that everyone will simply increase the
values.

You're not really addressing the core issue, and I will be ignoring your
future submissions of this change until you do.


* RE: [PATCH v11 net-next 0/1] introduce Hyper-V VM Sockets(hv_sock)
  2016-05-15 17:16 ` David Miller
@ 2016-05-17  2:45   ` Dexuan Cui
  2016-05-19  0:59     ` Dexuan Cui
  0 siblings, 1 reply; 7+ messages in thread
From: Dexuan Cui @ 2016-05-17  2:45 UTC (permalink / raw)
  To: David Miller
  Cc: gregkh, netdev, linux-kernel, devel, olaf, apw, jasowang, cavery,
	KY Srinivasan, Haiyang Zhang, joe, vkuznets

> From: David Miller [mailto:davem@davemloft.net]
> Sent: Monday, May 16, 2016 1:16
> To: Dexuan Cui <decui@microsoft.com>
> Cc: gregkh@linuxfoundation.org; netdev@vger.kernel.org; linux-
> kernel@vger.kernel.org; devel@linuxdriverproject.org; olaf@aepfle.de;
> apw@canonical.com; jasowang@redhat.com; cavery@redhat.com; KY
> Srinivasan <kys@microsoft.com>; Haiyang Zhang <haiyangz@microsoft.com>;
> joe@perches.com; vkuznets@redhat.com
> Subject: Re: [PATCH v11 net-next 0/1] introduce Hyper-V VM Sockets(hv_sock)
>
> From: Dexuan Cui <decui@microsoft.com>
> Date: Sun, 15 May 2016 09:52:42 -0700
>
> > Changes since v10
> >
> > 1) add module params: send_ring_page, recv_ring_page. They can be used to
> > enlarge the ringbuffer size to get better performance, e.g.,
> > # modprobe hv_sock  recv_ring_page=16 send_ring_page=16
> > By default, recv_ring_page is 3 and send_ring_page is 2.
> >
> > 2) add module param max_socket_number (the default is 1024).
> > A user can enlarge the number to create more than 1024 hv_sock sockets.
> > By default, 1024 sockets take about 1024 * (3+2+1+1) * 4KB = 28M bytes.
> > (Here 1+1 means 1 page for send/recv buffers per connection, respectively.)
>
> This is papering around my objections, and creates module parameters, which
> I am fundamentally against.
>
> You're making the facility unusable by default, just to work around my
> memory consumption concerns.
>
> What will end up happening is that everyone will simply increase the
> values.
>
> You're not really addressing the core issue, and I will be ignoring your
> future submissions of this change until you do.

David,
I am sorry I came across as ignoring your feedback; that was not my intention.
The current host-side design for this feature is such that each socket connection
needs its own channel, which consists of:

1.    A ring buffer for host to guest communication
2.    A ring buffer for guest to host communication

The memory for the ring buffers has to be pinned down, as it will be accessed
both from interrupt level in the Linux guest and from the host OS at any time.
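Roughly, the per-connection setup on the Linux side looks like the sketch
below; the ringbuffer sizes are the defaults I mentioned in the changelog, and
the function/callback names are just placeholders, not the code under review.

	#include <linux/hyperv.h>
	#include <linux/mm.h>

	/* Both ringbuffers are allocated when the channel is opened and stay
	 * pinned until the connection is torn down; the host may access them
	 * at any time. */
	static void hvsock_channel_cb(void *ctx)
	{
		/* runs in interrupt context: note the data arrival, wake up readers */
	}

	static int hvsock_open_channel_sketch(struct vmbus_channel *chan, void *sk)
	{
		return vmbus_open(chan,
				  2 * PAGE_SIZE,		/* guest-to-host (send) ringbuffer */
				  3 * PAGE_SIZE,		/* host-to-guest (recv) ringbuffer */
				  NULL, 0,			/* no extra data in the open request */
				  hvsock_channel_cb,		/* invoked when the host signals us */
				  sk);
	}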

To address your concerns, I am planning to re-implement both the receive path
and the send path so that no additional pinned memory will be needed.

Receive Path:
When the application does a read on the socket, we will dynamically allocate
the buffer and perform the read operation on the incoming ring buffer. Since
we will be in process context, we can sleep here and will use the
"GFP_KERNEL | __GFP_NOFAIL" flags. This buffer will be freed once the
application has consumed all the data.

Send Path:
On the send side, we will construct the payload to be sent directly on the
outgoing ringbuffer.
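As a very rough sketch of the receive-path idea (hvsock_read_ringbuffer() and
the simplified copy step are placeholders, not the eventual implementation,
which would go through the normal recvmsg()/msghdr machinery):

	#include <linux/slab.h>
	#include <linux/uaccess.h>
	#include <net/sock.h>

	/* Hypothetical helper that drains payload from the incoming VMBus ringbuffer. */
	static int hvsock_read_ringbuffer(struct sock *sk, void *buf, size_t len);

	static int hvsock_recv_sketch(struct sock *sk, void __user *to, size_t len)
	{
		void *buf;
		int ret;

		/* We are in process context, so the allocation may sleep. */
		buf = kmalloc(len, GFP_KERNEL | __GFP_NOFAIL);

		ret = hvsock_read_ringbuffer(sk, buf, len);
		if (ret > 0 && copy_to_user(to, buf, ret))
			ret = -EFAULT;

		/* Freed as soon as the application has consumed the data. */
		kfree(buf);
		return ret;
	}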

So, with these changes, the only memory that will be pinned down will be the
memory for the ring buffers on a per-connection basis and this memory will be
pinned down until the connection is torn down.

Please let me know if this addresses your concerns.

 Thanks,
-- Dexuan


* RE: [PATCH v11 net-next 0/1] introduce Hyper-V VM Sockets(hv_sock)
  2016-05-17  2:45   ` Dexuan Cui
@ 2016-05-19  0:59     ` Dexuan Cui
  2016-05-19  1:05       ` gregkh
  2016-05-19  4:12       ` David Miller
  0 siblings, 2 replies; 7+ messages in thread
From: Dexuan Cui @ 2016-05-19  0:59 UTC (permalink / raw)
  To: David Miller, KY Srinivasan
  Cc: olaf, gregkh, jasowang, linux-kernel, joe, netdev, apw, devel,
	Haiyang Zhang

> From: devel [mailto:driverdev-devel-bounces@linuxdriverproject.org] On Behalf
> Of Dexuan Cui
> Sent: Tuesday, May 17, 2016 10:46
> To: David Miller <davem@davemloft.net>
> Cc: olaf@aepfle.de; gregkh@linuxfoundation.org; jasowang@redhat.com;
> linux-kernel@vger.kernel.org; joe@perches.com; netdev@vger.kernel.org;
> apw@canonical.com; devel@linuxdriverproject.org; Haiyang Zhang
> <haiyangz@microsoft.com>
> Subject: RE: [PATCH v11 net-next 0/1] introduce Hyper-V VM Sockets(hv_sock)
> 
> > From: David Miller [mailto:davem@davemloft.net]
> > Sent: Monday, May 16, 2016 1:16
> > To: Dexuan Cui <decui@microsoft.com>
> > Cc: gregkh@linuxfoundation.org; netdev@vger.kernel.org; linux-
> > kernel@vger.kernel.org; devel@linuxdriverproject.org; olaf@aepfle.de;
> > apw@canonical.com; jasowang@redhat.com; cavery@redhat.com; KY
> > Srinivasan <kys@microsoft.com>; Haiyang Zhang <haiyangz@microsoft.com>;
> > joe@perches.com; vkuznets@redhat.com
> > Subject: Re: [PATCH v11 net-next 0/1] introduce Hyper-V VM Sockets(hv_sock)
> >
> > From: Dexuan Cui <decui@microsoft.com>
> > Date: Sun, 15 May 2016 09:52:42 -0700
> >
> > > Changes since v10
> > >
> > > 1) add module params: send_ring_page, recv_ring_page. They can be used
> to
> > > enlarge the ringbuffer size to get better performance, e.g.,
> > > # modprobe hv_sock  recv_ring_page=16 send_ring_page=16
> > > By default, recv_ring_page is 3 and send_ring_page is 2.
> > >
> > > 2) add module param max_socket_number (the default is 1024).
> > > A user can enlarge the number to create more than 1024 hv_sock sockets.
> > > By default, 1024 sockets take about 1024 * (3+2+1+1) * 4KB = 28M bytes.
> > > (Here 1+1 means 1 page for send/recv buffers per connection, respectively.)
> >
> > This is papering around my objections, and creates module parameters, which
> > I am fundamentally against.
> >
> > You're making the facility unusable by default, just to work around my
> > memory consumption concerns.
> >
> > What will end up happening is that everyone will simply increase the
> > values.
> >
> > You're not really addressing the core issue, and I will be ignoring your
> > future submissions of this change until you do.
> 
> David,
> I am sorry I came across as ignoring your feedback; that was not my intention.
> The current host side design for this feature is such that each socket connection
> needs its own channel, which consists of
> 
> 1.    A ring buffer for host to guest communication
> 2.    A ring buffer for guest to host communication
> 
> The memory for the ring buffers has to be pinned down as this will be accessed
> both from interrupt level in Linux guest and from the host OS at any time.
> 
> To address your concerns, I am planning to re-implement both the receive path
> and the send path so that no additional pinned memory will be needed.
> 
> Receive Path:
> When the application does a read on the socket, we will dynamically allocate
> the buffer and perform the read operation on the incoming ring buffer. Since
> we will be in the process context, we can sleep here and will set the
> "GFP_KERNEL | __GFP_NOFAIL" flags. This buffer will be freed once the
> application consumes all the data.
> 
> Send Path:
> On the send side, we will construct the payload to be sent directly on the
> outgoing ringbuffer.
> 
> So, with these changes, the only memory that will be pinned down will be the
> memory for the ring buffers on a per-connection basis and this memory will be
> pinned down until the connection is torn down.
> 
> Please let me know if this addresses your concerns.
> 
> -- Dexuan

Hi David,
Ping. Really appreciate your comment.

 Thanks,
-- Dexuan


* Re: [PATCH v11 net-next 0/1] introduce Hyper-V VM Sockets(hv_sock)
  2016-05-19  0:59     ` Dexuan Cui
@ 2016-05-19  1:05       ` gregkh
  2016-05-19  4:12       ` David Miller
  1 sibling, 0 replies; 7+ messages in thread
From: gregkh @ 2016-05-19  1:05 UTC (permalink / raw)
  To: Dexuan Cui
  Cc: David Miller, KY Srinivasan, olaf, jasowang, linux-kernel, joe,
	netdev, apw, devel, Haiyang Zhang

On Thu, May 19, 2016 at 12:59:09AM +0000, Dexuan Cui wrote:
> > From: devel [mailto:driverdev-devel-bounces@linuxdriverproject.org] On Behalf
> > Of Dexuan Cui
> > Sent: Tuesday, May 17, 2016 10:46
> > To: David Miller <davem@davemloft.net>
> > Cc: olaf@aepfle.de; gregkh@linuxfoundation.org; jasowang@redhat.com;
> > linux-kernel@vger.kernel.org; joe@perches.com; netdev@vger.kernel.org;
> > apw@canonical.com; devel@linuxdriverproject.org; Haiyang Zhang
> > <haiyangz@microsoft.com>
> > Subject: RE: [PATCH v11 net-next 0/1] introduce Hyper-V VM Sockets(hv_sock)
> > 
> > > From: David Miller [mailto:davem@davemloft.net]
> > > Sent: Monday, May 16, 2016 1:16
> > > To: Dexuan Cui <decui@microsoft.com>
> > > Cc: gregkh@linuxfoundation.org; netdev@vger.kernel.org; linux-
> > > kernel@vger.kernel.org; devel@linuxdriverproject.org; olaf@aepfle.de;
> > > apw@canonical.com; jasowang@redhat.com; cavery@redhat.com; KY
> > > Srinivasan <kys@microsoft.com>; Haiyang Zhang <haiyangz@microsoft.com>;
> > > joe@perches.com; vkuznets@redhat.com
> > > Subject: Re: [PATCH v11 net-next 0/1] introduce Hyper-V VM Sockets(hv_sock)
> > >
> > > From: Dexuan Cui <decui@microsoft.com>
> > > Date: Sun, 15 May 2016 09:52:42 -0700
> > >
> > > > Changes since v10
> > > >
> > > > 1) add module params: send_ring_page, recv_ring_page. They can be used
> > to
> > > > enlarge the ringbuffer size to get better performance, e.g.,
> > > > # modprobe hv_sock  recv_ring_page=16 send_ring_page=16
> > > > By default, recv_ring_page is 3 and send_ring_page is 2.
> > > >
> > > > 2) add module param max_socket_number (the default is 1024).
> > > > A user can enlarge the number to create more than 1024 hv_sock sockets.
> > > > By default, 1024 sockets take about 1024 * (3+2+1+1) * 4KB = 28M bytes.
> > > > (Here 1+1 means 1 page for send/recv buffers per connection, respectively.)
> > >
> > > This is papering around my objections, and creates module parameters, which
> > > I am fundamentally against.
> > >
> > > You're making the facility unusable by default, just to work around my
> > > memory consumption concerns.
> > >
> > > What will end up happening is that everyone will simply increase the
> > > values.
> > >
> > > You're not really addressing the core issue, and I will be ignoring your
> > > future submissions of this change until you do.
> > 
> > David,
> > I am sorry I came across as ignoring your feedback; that was not my intention.
> > The current host side design for this feature is such that each socket connection
> > needs its own channel, which consists of
> > 
> > 1.    A ring buffer for host to guest communication
> > 2.    A ring buffer for guest to host communication
> > 
> > The memory for the ring buffers has to be pinned down as this will be accessed
> > both from interrupt level in Linux guest and from the host OS at any time.
> > 
> > To address your concerns, I am planning to re-implement both the receive path
> > and the send path so that no additional pinned memory will be needed.
> > 
> > Receive Path:
> > When the application does a read on the socket, we will dynamically allocate
> > the buffer and perform the read operation on the incoming ring buffer. Since
> > we will be in the process context, we can sleep here and will set the
> > "GFP_KERNEL | __GFP_NOFAIL" flags. This buffer will be freed once the
> > application consumes all the data.
> > 
> > Send Path:
> > On the send side, we will construct the payload to be sent directly on the
> > outgoing ringbuffer.
> > 
> > So, with these changes, the only memory that will be pinned down will be the
> > memory for the ring buffers on a per-connection basis and this memory will be
> > pinned down until the connection is torn down.
> > 
> > Please let me know if this addresses your concerns.
> > 
> > -- Dexuan
> 
> Hi David,
> Ping. Really appreciate your comment.

Don't wait for people to respond to random design questions; go work on
the code and figure out if it is workable or not yourself.  Then post
patches.  We aren't responsible for your work, you are.

greg k-h


* Re: [PATCH v11 net-next 0/1] introduce Hyper-V VM Sockets(hv_sock)
  2016-05-19  0:59     ` Dexuan Cui
  2016-05-19  1:05       ` gregkh
@ 2016-05-19  4:12       ` David Miller
  2016-05-19  5:41         ` Dexuan Cui
  1 sibling, 1 reply; 7+ messages in thread
From: David Miller @ 2016-05-19  4:12 UTC (permalink / raw)
  To: decui
  Cc: kys, olaf, gregkh, jasowang, linux-kernel, joe, netdev, apw,
	devel, haiyangz


I'm travelling and very busy with the merge window.  So sorry I won't be able
to think about this for some time.


* RE: [PATCH v11 net-next 0/1] introduce Hyper-V VM Sockets(hv_sock)
  2016-05-19  4:12       ` David Miller
@ 2016-05-19  5:41         ` Dexuan Cui
  0 siblings, 0 replies; 7+ messages in thread
From: Dexuan Cui @ 2016-05-19  5:41 UTC (permalink / raw)
  To: David Miller
  Cc: KY Srinivasan, olaf, gregkh, jasowang, linux-kernel, joe, netdev,
	apw, devel, Haiyang Zhang

> From: David Miller [mailto:davem@davemloft.net]
> Sent: Thursday, May 19, 2016 12:13
> To: Dexuan Cui <decui@microsoft.com>
> Cc: KY Srinivasan <kys@microsoft.com>; olaf@aepfle.de;
> gregkh@linuxfoundation.org; jasowang@redhat.com; linux-
> kernel@vger.kernel.org; joe@perches.com; netdev@vger.kernel.org;
> apw@canonical.com; devel@linuxdriverproject.org; Haiyang Zhang
> <haiyangz@microsoft.com>
> Subject: Re: [PATCH v11 net-next 0/1] introduce Hyper-V VM Sockets(hv_sock)
> 
> 
> I'm travelling and very busy with the merge window.  So sorry I won't be able
> to think about this for some time.

David, 
Sure, I understand.

Please let me recap my last mail:

1) I'll replace my statically allocated per-connection "send/recv buffers" with
dynamically allocated ones, so no buffer is used when there is no traffic.

2) Another kind of buffer, i.e., the multi-page "VMBus send/recv ringbuffer", is
a must IMO due to the host side's design of the feature: every connection needs
its own ringbuffer, which takes several pages (2~3 pages at least, and 5 pages
should suffice for good performance). The ringbuffer can be accessed by the
host at any time, so IMO the pages can't be swappable.

I understand net-next is closed now. I'm going to post the next version
after 4.7-rc1 is out in several weeks.

If you could give me some suggestions, I would definitely be happy to take them.

Thanks!
-- Dexuan

