From: Christian Borntraeger <borntraeger@de.ibm.com>
To: Zhou Jie <zhoujie2011@cn.fujitsu.com>, qemu-devel@nongnu.org
Cc: qemu-trivial@nongnu.org, mst@redhat.com
Subject: Re: [Qemu-devel] [PATCH] hw/net/virtio-net: Allocating Large sized arrays to heap
Date: Tue, 26 Apr 2016 10:49:15 +0200	[thread overview]
Message-ID: <571F2B8B.8030505@de.ibm.com> (raw)
In-Reply-To: <1461657924-1933-1-git-send-email-zhoujie2011@cn.fujitsu.com>

On 04/26/2016 10:05 AM, Zhou Jie wrote:
> virtio_net_flush_tx has a huge stack usage of approximately 16392 bytes.
> Move the large arrays to the heap to reduce stack usage.
> 
> Signed-off-by: Zhou Jie <zhoujie2011@cn.fujitsu.com>
> ---
>  hw/net/virtio-net.c | 15 +++++++++++----
>  1 file changed, 11 insertions(+), 4 deletions(-)
> 
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 5798f87..cab7bbc 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -1213,6 +1213,7 @@ static int32_t virtio_net_flush_tx(VirtIONetQueue *q)
>      VirtIONet *n = q->n;
>      VirtIODevice *vdev = VIRTIO_DEVICE(n);
>      VirtQueueElement *elem;
> +    struct iovec *sg = NULL, *sg2 = NULL;
>      int32_t num_packets = 0;
>      int queue_index = vq2q(virtio_get_queue_index(q->tx_vq));
>      if (!(vdev->status & VIRTIO_CONFIG_S_DRIVER_OK)) {
> @@ -1224,10 +1225,12 @@ static int32_t virtio_net_flush_tx(VirtIONetQueue *q)
>          return num_packets;
>      }
> 
> +    sg = g_new(struct iovec, VIRTQUEUE_MAX_SIZE);
> +    sg2 = g_new(struct iovec, VIRTQUEUE_MAX_SIZE + 1);


As I said in another mail, 16k is usually perfectly fine for a userspace
stack, and doing allocations in a hot path might actually hurt performance.

Unless we have a real problem (e.g. a very long call chain on a small thread
stack) I would prefer not to change this.
Have you seen a real problem caused by this?



>      for (;;) {
>          ssize_t ret;
>          unsigned int out_num;
> -        struct iovec sg[VIRTQUEUE_MAX_SIZE], sg2[VIRTQUEUE_MAX_SIZE + 1], *out_sg;
> +        struct iovec *out_sg;
>          struct virtio_net_hdr_mrg_rxbuf mhdr;
> 
>          elem = virtqueue_pop(q->tx_vq, sizeof(VirtQueueElement));
> @@ -1252,7 +1255,7 @@ static int32_t virtio_net_flush_tx(VirtIONetQueue *q)
>                  virtio_net_hdr_swap(vdev, (void *) &mhdr);
>                  sg2[0].iov_base = &mhdr;
>                  sg2[0].iov_len = n->guest_hdr_len;
> -                out_num = iov_copy(&sg2[1], ARRAY_SIZE(sg2) - 1,
> +                out_num = iov_copy(&sg2[1], VIRTQUEUE_MAX_SIZE,
>                                     out_sg, out_num,
>                                     n->guest_hdr_len, -1);
>                  if (out_num == VIRTQUEUE_MAX_SIZE) {
> @@ -1269,10 +1272,10 @@ static int32_t virtio_net_flush_tx(VirtIONetQueue *q)
>           */
>          assert(n->host_hdr_len <= n->guest_hdr_len);
>          if (n->host_hdr_len != n->guest_hdr_len) {
> -            unsigned sg_num = iov_copy(sg, ARRAY_SIZE(sg),
> +            unsigned sg_num = iov_copy(sg, VIRTQUEUE_MAX_SIZE,
>                                         out_sg, out_num,
>                                         0, n->host_hdr_len);
> -            sg_num += iov_copy(sg + sg_num, ARRAY_SIZE(sg) - sg_num,
> +            sg_num += iov_copy(sg + sg_num, VIRTQUEUE_MAX_SIZE - sg_num,
>                               out_sg, out_num,
>                               n->guest_hdr_len, -1);
>              out_num = sg_num;
> @@ -1284,6 +1287,8 @@ static int32_t virtio_net_flush_tx(VirtIONetQueue *q)
>          if (ret == 0) {
>              virtio_queue_set_notification(q->tx_vq, 0);
>              q->async_tx.elem = elem;
> +            g_free(sg);
> +            g_free(sg2);
>              return -EBUSY;
>          }
> 
> @@ -1296,6 +1301,8 @@ drop:
>              break;
>          }
>      }
> +    g_free(sg);
> +    g_free(sg2);
>      return num_packets;
>  }
> 

Thread overview: 5+ messages
2016-04-26  8:05 [Qemu-devel] [PATCH] hw/net/virtio-net: Allocating Large sized arrays to heap Zhou Jie
2016-04-26  8:49 ` Christian Borntraeger [this message]
2016-04-26  9:05   ` Zhou Jie
2016-04-26 12:42 ` Michael S. Tsirkin
2016-04-27  0:45   ` Zhou Jie