From: Steve Dickson <SteveD@RedHat.com>
To: Doug Nazar <nazard@nazar.ca>, libtirpc-devel@lists.sourceforge.net
Cc: linux-nfs@vger.kernel.org
Subject: Re: [PATCH 2/5] svc: Batch allocations of pollfds
Date: Wed, 29 Jul 2020 10:20:16 -0400	[thread overview]
Message-ID: <fc8ebe17-7e16-6be5-2323-6111a393510e@RedHat.com> (raw)
In-Reply-To: <20200722053445.27987-3-nazard@nazar.ca>



On 7/22/20 1:34 AM, Doug Nazar wrote:
> Signed-off-by: Doug Nazar <nazard@nazar.ca>
> ---
>  src/svc.c | 16 +++++++++-------
>  1 file changed, 9 insertions(+), 7 deletions(-)
> 
> diff --git a/src/svc.c b/src/svc.c
> index 6db164b..57f7ba3 100644
> --- a/src/svc.c
> +++ b/src/svc.c
> @@ -54,6 +54,7 @@
>  #include "rpc_com.h"
>  
>  #define	RQCRED_SIZE	400	/* this size is excessive */
> +#define	SVC_POLLFD_INCREMENT	16
>  
>  #define max(a, b) (a > b ? a : b)
>  
> @@ -107,6 +108,7 @@ xprt_register (xprt)
>    if (sock < _rpc_dtablesize())
>      {
>        int i;
> +      size_t size;
>        struct pollfd *new_svc_pollfd;
>  
>        __svc_xports[sock] = xprt;
> @@ -126,17 +128,17 @@ xprt_register (xprt)
>              goto unlock;
>            }
>  
> -      new_svc_pollfd = (struct pollfd *) realloc (svc_pollfd,
> -                                                  sizeof (struct pollfd)
> -                                                  * (svc_max_pollfd + 1));
> +      size = sizeof (struct pollfd) * (svc_max_pollfd + SVC_POLLFD_INCREMENT);
> +      new_svc_pollfd = (struct pollfd *) realloc (svc_pollfd, size);
>        if (new_svc_pollfd == NULL) /* Out of memory */
>          goto unlock;
>        svc_pollfd = new_svc_pollfd;
> -      ++svc_max_pollfd;
> +      svc_max_pollfd += SVC_POLLFD_INCREMENT;
>  
> -      svc_pollfd[svc_max_pollfd - 1].fd = sock;
> -      svc_pollfd[svc_max_pollfd - 1].events = (POLLIN | POLLPRI |
> -                                               POLLRDNORM | POLLRDBAND);
> +      svc_pollfd[i].fd = sock;
> +      svc_pollfd[i].events = (POLLIN | POLLPRI | POLLRDNORM | POLLRDBAND);
> +      for (++i; i < svc_max_pollfd; ++i)
> +        svc_pollfd[i].fd = -1;
>      }
>  unlock:
>    rwlock_unlock (&svc_fd_lock);
> 
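
If I'm reading this right, the change boils down to something like the
following standalone sketch (illustrative names only -- register_fd,
pollfds, POLLFD_INCREMENT -- not the real libtirpc globals): grow the
array by a batch of 16 entries per realloc() and park the spare slots
with fd = -1 so later registrations can reuse them without growing
the array again.

  #include <poll.h>
  #include <stdlib.h>

  #define POLLFD_INCREMENT 16

  static struct pollfd *pollfds;   /* dynamically grown pollfd array */
  static int max_pollfd;           /* current capacity of pollfds[]  */

  static int
  register_fd (int fd)
  {
    int i;

    /* Reuse a previously freed slot (parked with fd == -1) if any. */
    for (i = 0; i < max_pollfd; ++i)
      if (pollfds[i].fd == -1)
        goto found;

    /* No free slot: grow by a whole batch, so one realloc() covers the
       next POLLFD_INCREMENT registrations.  'i' now equals the old
       capacity and indexes the first newly allocated slot.  */
    {
      size_t size = sizeof (struct pollfd) * (max_pollfd + POLLFD_INCREMENT);
      struct pollfd *new_pollfds = (struct pollfd *) realloc (pollfds, size);
      int j;

      if (new_pollfds == NULL)
        return -1;                 /* out of memory */
      pollfds = new_pollfds;
      max_pollfd += POLLFD_INCREMENT;

      /* Park the spare new slots so the reuse loop finds them later. */
      for (j = i + 1; j < max_pollfd; ++j)
        pollfds[j].fd = -1;
    }

  found:
    pollfds[i].fd = fd;
    pollfds[i].events = POLLIN | POLLPRI | POLLRDNORM | POLLRDBAND;
    return i;
  }

  int
  main (void)
  {
    int fd;

    /* 17 registrations trigger only two realloc() calls (16 + 16 slots). */
    for (fd = 3; fd < 20; ++fd)
      register_fd (fd);

    free (pollfds);
    return 0;
  }

With that scheme, registering N transports costs roughly N/16 calls to
realloc() instead of N, at the price of up to 15 unused pollfd entries.
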
Just curious as to why batch allocations are needed. What problem does it solve?

steved.



Thread overview: 8+ messages
2020-07-22  5:34 [PATCH 0/5] libtirpc patches Doug Nazar
2020-07-22  5:34 ` [PATCH 1/5] svc_dg: Free xp_netid during destroy Doug Nazar
2020-07-22  5:34 ` [PATCH 2/5] svc: Batch allocations of pollfds Doug Nazar
2020-07-29 14:20   ` Steve Dickson [this message]
2020-07-22  5:34 ` [PATCH 3/5] Add destructor functions to cleanup static resources on exit Doug Nazar
2020-07-22  5:34 ` [PATCH 4/5] Add ability to detect if we're on the main thread Doug Nazar
2020-07-29 14:27   ` Steve Dickson
2020-07-22  5:34 ` [PATCH 5/5] Use static object on main thread, instead of thread specific data Doug Nazar
