From: Stefano Garzarella <sgarzare@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"David S. Miller" <davem@davemloft.net>,
	virtualization@lists.linux-foundation.org,
	Jason Wang <jasowang@redhat.com>,
	kvm@vger.kernel.org
Subject: Re: [PATCH v4 1/5] vsock/virtio: limit the memory used per-socket
Date: Tue, 30 Jul 2019 11:35:39 +0200	[thread overview]
Message-ID: <20190730093539.dcksure3vrykir3g@steredhat> (raw)
In-Reply-To: <20190729143622-mutt-send-email-mst@kernel.org>

On Mon, Jul 29, 2019 at 03:10:15PM -0400, Michael S. Tsirkin wrote:
> On Mon, Jul 29, 2019 at 06:50:56PM +0200, Stefano Garzarella wrote:
> > On Mon, Jul 29, 2019 at 06:19:03PM +0200, Stefano Garzarella wrote:
> > > On Mon, Jul 29, 2019 at 11:49:02AM -0400, Michael S. Tsirkin wrote:
> > > > On Mon, Jul 29, 2019 at 05:36:56PM +0200, Stefano Garzarella wrote:
> > > > > On Mon, Jul 29, 2019 at 10:04:29AM -0400, Michael S. Tsirkin wrote:
> > > > > > On Wed, Jul 17, 2019 at 01:30:26PM +0200, Stefano Garzarella wrote:
> > > > > > > Since virtio-vsock was introduced, the buffers filled by the host
> > > > > > > and pushed to the guest using the vring are queued directly in
> > > > > > > a per-socket list. These buffers are preallocated by the guest
> > > > > > > with a fixed size (4 KB).
> > > > > > > 
> > > > > > > The maximum amount of memory used by each socket should be
> > > > > > > controlled by the credit mechanism.
> > > > > > > The default credit available per socket is 256 KB, but if we use
> > > > > > > only 1 byte per packet, the guest can queue up to 262144 4-KB
> > > > > > > buffers, using up to 1 GB of memory per socket. In addition, the
> > > > > > > guest will continue to fill the vring with new 4 KB free buffers
> > > > > > > to avoid starvation of other sockets.
> > > > > > > 
> > > > > > > This patch mitigates the issue by copying the payload of small
> > > > > > > packets (< 128 bytes) into the buffer of the last packet queued, in
> > > > > > > order to avoid wasting memory.
> > > > > > > 
> > > > > > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> > > > > > > Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
> > > > > > 
> > > > > > This is good enough for net-next, but for net I think we
> > > > > > should figure out how to address the issue completely.
> > > > > > Can we make the accounting precise? What happens to
> > > > > > performance if we do?
> > > > > > 
> > > > > 
> > > > > In order to do more precise accounting, maybe we can use the buffer
> > > > > size instead of the payload size when we update the available credit.
> > > > > In this way, the credit available for each socket will reflect the
> > > > > memory actually used.
> > > > > 
> > > > > I should check this more carefully, because I'm not sure what happens
> > > > > if the peer sees 1 KB of space available and then sends 1 KB of
> > > > > payload (using a 4 KB buffer).
> > > > > 
> > > > > The other option is to copy each packet into a new buffer, like I did
> > > > > in v2 [2], but this forces us to make a copy for every packet that
> > > > > does not fill the entire buffer, which is perhaps too expensive.
> > > > > 
> > > > > [2] https://patchwork.kernel.org/patch/10938741/
> > > > > 
> > > > > 
> > > > > Thanks,
> > > > > Stefano
> > > > 
> > > > Interesting. You are right, and at some level the protocol forces copies.
> > > > 
> > > > We could try to detect that the actual memory is getting close to
> > > > admin limits and force copies on queued packets after the fact.
> > > > Is that practical?
> > > 
> > > Yes, I think it is doable!
> > > We can decrease the available credit by the size of the buffer queued,
> > > and when the buffer size of the packet to queue is bigger than the
> > > available credit, we can copy it.
> > > 
> > > > 
> > > > And yes we can extend the credit accounting to include buffer size.
> > > > That's a protocol change but maybe it makes sense.
> > > 
> > > Since we send the available credit to the other peer, maybe this
> > > change can be backwards compatible (I'll check this more carefully).
> > 
> > What I said was wrong.
> > 
> > We send a counter (increased when the user consumes the packets) and the
> > "buf_alloc" (the maximum memory allowed) to the other peer.
> > The sender takes the difference between a local counter (increased when
> > packets are sent) and the remote counter to calculate the available credit:
> > 
> >     u32 virtio_transport_get_credit(struct virtio_vsock_sock *vvs, u32 credit)
> >     {
> >     	u32 ret;
> > 
> >     	spin_lock_bh(&vvs->tx_lock);
> >     	ret = vvs->peer_buf_alloc - (vvs->tx_cnt - vvs->peer_fwd_cnt);
> >     	if (ret > credit)
> >     		ret = credit;
> >     	vvs->tx_cnt += ret;
> >     	spin_unlock_bh(&vvs->tx_lock);
> > 
> >     	return ret;
> >     }
> > 
> > Maybe I can play with "buf_alloc" to account for bytes queued but not
> > used.
> > 
> > Thanks,
> > Stefano
> 
> Right. And the idea behind it all was that if we send credit
> to the remote peer, then we have space for it.

Yes.

> I think the basic idea was that if we have actually allocated
> memory and can copy the data there, then we send the credit to
> the remote peer.
> 
> Of course that means an extra copy for every packet.
> So as an optimization, it seems that we just assume
> that we will be able to allocate a new buffer.

Yes, we refill the virtqueue when half of the buffers have been used.
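
(For reference, a rough sketch of that refill heuristic; the field and
function names are only illustrative, not necessarily the exact ones in
the guest driver:)

    /* Called when a filled RX buffer has been taken off the vring. */
    static void virtio_vsock_rx_buf_used(struct virtio_vsock *vsock)
    {
    	vsock->rx_buf_nr--;	/* one preallocated 4 KB buffer consumed */

    	/* Refill only once half of the posted buffers have been used,
    	 * instead of replenishing the vring after every packet. */
    	if (vsock->rx_buf_nr < vsock->rx_buf_max_nr / 2)
    		virtio_vsock_rx_fill(vsock);	/* post fresh 4 KB buffers */
    }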

> 
> First, this is not the best we can do. We can actually
> allocate memory in the socket before sending credit.

In this case, IIUC, we should allocate an entire buffer (4 KB),
so we can reuse it if the packet is big.

> If the packet is small, then we copy it there.
> If the packet is big, then we queue the packet,
> take the buffer out of the socket, and add it to the virtqueue.
> 
> The second question is what to do about medium-sized packets.
> The packet is 1 KB but the buffer is 4 KB; what do we do?
> And here I wonder: why don't we add the remaining 3 KB of the buffer
> back to the vq?

This would allow us to have accurate credit accounting.
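
A minimal sketch of what that accounting could look like (hypothetical:
it assumes a pkt->buf_len field recording the size of the allocated
buffer, and reuses helper names in the style of the existing code):

    /* Charge the receiver's credit with the memory actually held by the
     * socket (the full buffer), not just the payload length, so that a
     * 1-byte payload in a 4 KB buffer still consumes 4 KB of credit. */
    static void virtio_transport_inc_rx_pkt(struct virtio_vsock_sock *vvs,
    					    struct virtio_vsock_pkt *pkt)
    {
    	vvs->rx_bytes += pkt->buf_len;
    }

    static void virtio_transport_dec_rx_pkt(struct virtio_vsock_sock *vvs,
    					    struct virtio_vsock_pkt *pkt)
    {
    	vvs->rx_bytes -= pkt->buf_len;
    	vvs->fwd_cnt += pkt->buf_len;	/* credit given back to the sender */
    }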

The problem here is compatibility. Before this series, the virtio-vsock
and vhost-vsock modules had the RX buffer size hard-coded
(VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE = 4 KB). So, if we send a buffer
smaller than 4 KB, there might be issues.

Maybe it is time to add 'features' to the virtio-vsock device.
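
Purely as an illustration (this feature bit does not exist today), a
negotiated feature could let the peer advertise that it accepts RX
buffers smaller than 4 KB, falling back to the current behaviour
otherwise:

    #include <linux/virtio_config.h>

    /* Hypothetical feature bit, shown only to sketch the negotiation. */
    #define VIRTIO_VSOCK_F_ANY_RX_BUF_SIZE	3

    static bool virtio_vsock_can_use_small_bufs(struct virtio_device *vdev)
    {
    	/* Keep the fixed 4 KB buffers when talking to an old peer. */
    	return virtio_has_feature(vdev, VIRTIO_VSOCK_F_ANY_RX_BUF_SIZE);
    }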

Thanks,
Stefano

Thread overview: 69+ messages
2019-07-17 11:30 [PATCH v4 0/5] vsock/virtio: optimizations to increase the throughput Stefano Garzarella
2019-07-17 11:30 ` [PATCH v4 1/5] vsock/virtio: limit the memory used per-socket Stefano Garzarella
2019-07-29 14:04   ` Michael S. Tsirkin
2019-07-29 15:36     ` Stefano Garzarella
2019-07-29 15:49       ` Michael S. Tsirkin
2019-07-29 16:19         ` Stefano Garzarella
2019-07-29 16:50           ` Stefano Garzarella
2019-07-29 19:10             ` Michael S. Tsirkin
2019-07-30  9:35               ` Stefano Garzarella [this message]
2019-07-30 20:42                 ` Michael S. Tsirkin
2019-08-01 10:47                   ` Stefano Garzarella
2019-08-01 13:21                     ` Michael S. Tsirkin
2019-08-01 13:36                       ` Stefano Garzarella
2019-09-01  8:26                         ` Michael S. Tsirkin
2019-09-01 10:17                           ` Michael S. Tsirkin
2019-09-02  9:57                             ` Stefano Garzarella
2019-09-02 15:23                               ` Michael S. Tsirkin
2019-09-03  4:39                               ` Michael S. Tsirkin
2019-09-03  7:45                                 ` Stefano Garzarella
2019-09-03  7:52                                   ` Michael S. Tsirkin
2019-09-03  8:00                                     ` Stefano Garzarella
2019-07-29 16:01       ` Michael S. Tsirkin
2019-07-29 16:41         ` Stefano Garzarella
2019-07-29 19:33           ` Michael S. Tsirkin
2019-08-30  9:40     ` Stefano Garzarella
2019-09-01  6:56       ` Michael S. Tsirkin
2019-09-02  8:39         ` Stefan Hajnoczi
2019-09-02  8:55           ` Stefano Garzarella
2019-10-11 13:40         ` Stefano Garzarella
2019-10-11 14:11           ` Michael S. Tsirkin
2019-10-11 14:23             ` Stefano Garzarella
2019-10-14  8:17           ` Stefan Hajnoczi
2019-10-14  8:21             ` Jason Wang
2019-10-14  8:38               ` Stefano Garzarella
2019-07-17 11:30 ` [PATCH v4 2/5] vsock/virtio: reduce credit update messages Stefano Garzarella
2019-07-22  8:36   ` Stefan Hajnoczi
2019-09-03  4:38   ` Michael S. Tsirkin
2019-09-03  7:31     ` Stefano Garzarella
2019-09-03  7:38       ` Michael S. Tsirkin
2019-07-17 11:30 ` [PATCH v4 3/5] vsock/virtio: fix locking in virtio_transport_inc_tx_pkt() Stefano Garzarella
2019-07-17 14:51   ` Michael S. Tsirkin
2019-07-18  7:43     ` Stefano Garzarella
2019-07-22  8:53   ` Stefan Hajnoczi
2019-07-17 11:30 ` [PATCH v4 4/5] vhost/vsock: split packets to send using multiple buffers Stefano Garzarella
2019-07-17 14:54   ` Michael S. Tsirkin
2019-07-18  7:50     ` Stefano Garzarella
2019-07-18  8:13       ` Michael S. Tsirkin
2019-07-18  9:37         ` Stefano Garzarella
2019-07-18 11:35           ` Michael S. Tsirkin
2019-07-19  8:08             ` Stefano Garzarella
2019-07-19  8:21               ` Jason Wang
2019-07-19  8:39                 ` Stefano Garzarella
2019-07-19  8:51                   ` Jason Wang
2019-07-19  9:20                     ` Stefano Garzarella
2019-07-22  9:06   ` Stefan Hajnoczi
2019-07-17 11:30 ` [PATCH v4 5/5] vsock/virtio: change the maximum packet size allowed Stefano Garzarella
2019-07-17 14:59   ` Michael S. Tsirkin
2019-07-18  7:52     ` Stefano Garzarella
2019-07-18 12:33       ` Michael S. Tsirkin
2019-07-19  8:29         ` Stefano Garzarella
2019-07-22  9:07   ` Stefan Hajnoczi
2019-07-22  9:08 ` [PATCH v4 0/5] vsock/virtio: optimizations to increase the throughput Stefan Hajnoczi
2019-07-22  9:14   ` Stefano Garzarella
2019-07-29 13:12     ` Stefan Hajnoczi
2019-07-29 13:59 ` Michael S. Tsirkin
2019-07-30  9:40   ` Stefano Garzarella
2019-07-30 10:03     ` Jason Wang
2019-07-30 15:38       ` Stefano Garzarella
2019-09-03  8:02 ` request for stable (was Re: [PATCH v4 0/5] vsock/virtio: optimizations to increase the throughput) Michael S. Tsirkin
