From: Eric Dumazet <eric.dumazet@gmail.com>
To: Hangbin Liu <liuhangbin@gmail.com>, netdev@vger.kernel.org
Cc: Stephen Hemminger <stephen@networkplumber.org>,
	Michal Kubecek <mkubecek@suse.cz>, Phil Sutter <phil@nwl.cc>
Subject: Re: [PATCHv5 iproute2 net-next 2/2] lib/libnetlink: re malloc buff if size is not enough
Date: Tue, 12 Feb 2019 15:43:24 -0800
Message-ID: <4e327b7c-fc1f-ea3a-ba2a-54d81493ba62@gmail.com>
In-Reply-To: <244174ae-3ab4-68c0-6783-f8c91840a7e1@gmail.com>

On 02/12/2019 03:32 PM, Eric Dumazet wrote:
> 
> This patch brings a serious performance penalty.
> 
> The ss command now uses two system calls per ~4 KB of data (a MSG_PEEK|MSG_TRUNC size probe, then the actual read):
> 
> recvmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{NULL, 0}], msg_controllen=0, msg_flags=MSG_TRUNC}, MSG_PEEK|MSG_TRUNC) = 3328 <0.000120>
> recvmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"h\0\0\0\24\0\2\0@\342\1\0\322\0\6\0\n\1\1\0\250\253\276@&\7\370\260\200\231\16\6"..., 3328}], msg_controllen=0, msg_flags=0}, 0) = 3328 <0.000108>
> recvmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{NULL, 0}], msg_controllen=0, msg_flags=MSG_TRUNC}, MSG_PEEK|MSG_TRUNC) = 3328 <0.000086>
> recvmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"h\0\0\0\24\0\2\0@\342\1\0\322\0\6\0\n\10\2\0002A\266S&\7\370\260\200\231\16\6"..., 3328}], msg_controllen=0, msg_flags=0}, 0) = 3328 <0.000121>
> 
> So we are back to a very pessimistic situation.
> 
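
Each pair of lines in the trace corresponds to a pattern like the following (a sketch for illustration, not the actual lib/libnetlink.c code):

	#include <linux/netlink.h>
	#include <stdlib.h>
	#include <sys/socket.h>

	/* Peek-then-read: two recvmsg() calls per netlink message. */
	static ssize_t recv_one(int fd, char **answer)
	{
		struct sockaddr_nl nladdr;
		struct iovec iov = { .iov_base = NULL, .iov_len = 0 };
		struct msghdr msg = {
			.msg_name    = &nladdr,
			.msg_namelen = sizeof(nladdr),
			.msg_iov     = &iov,
			.msg_iovlen  = 1,
		};
		ssize_t len;
		char *buf;

		/* Syscall 1: a zero-length iov with MSG_PEEK|MSG_TRUNC makes
		 * the kernel report the size of the next queued message
		 * without consuming it. */
		len = recvmsg(fd, &msg, MSG_PEEK | MSG_TRUNC);
		if (len < 0)
			return len;

		/* Syscall 2: read it into an exactly-sized buffer. */
		buf = malloc(len);
		if (!buf)
			return -1;
		iov.iov_base = buf;
		iov.iov_len = len;
		len = recvmsg(fd, &msg, 0);
		if (len < 0) {
			free(buf);
			return len;
		}
		*answer = buf;
		return len;
	}

Since the kernel hands dump replies back in ~3-4 KB skbs by default, that is two syscalls for every few kilobytes of socket data.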

I guess this patch will solve the issue:

diff --git a/lib/libnetlink.c b/lib/libnetlink.c
index ced33728777a17e0905e76acb904ac4709707488..309b5b3787e3d8f8c47f035d270ae2b4df01703e 100644
--- a/lib/libnetlink.c
+++ b/lib/libnetlink.c
@@ -442,6 +442,8 @@ static int rtnl_recvmsg(int fd, struct msghdr *msg, char **answer)
        if (len < 0)
                return len;
 
+       if (len < 32768)
+               len = 32768;
        buf = malloc(len);
        if (!buf) {
                fprintf(stderr, "malloc error: not enough buffer\n");
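
The 32768 floor plays into how the kernel sizes dump replies: since commit 9063e21fb026 ("netlink: autosize skb lengthes"), netlink_recvmsg() records the largest receive buffer userspace has passed in (nlk->max_recvmsg_len) and netlink_dump() grows subsequent reply skbs up to that size. Always reading with at least a 32 KB buffer therefore lets the kernel batch many messages per skb instead of the ~3 KB chunks seen in the trace. A self-contained sketch of the fixed flow (illustrative only, not iproute2 code; an RTM_GETLINK dump with minimal error handling):

	#include <linux/netlink.h>
	#include <linux/rtnetlink.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/socket.h>
	#include <unistd.h>

	int main(void)
	{
		struct sockaddr_nl nladdr = { .nl_family = AF_NETLINK };
		struct {
			struct nlmsghdr nlh;
			struct ifinfomsg ifm;
		} req = {
			.nlh = {
				.nlmsg_len   = sizeof(req),
				.nlmsg_type  = RTM_GETLINK,
				.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP,
				.nlmsg_seq   = 1,
			},
			.ifm = { .ifi_family = AF_UNSPEC },
		};
		int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
		int done = 0;

		if (fd < 0 || sendto(fd, &req, sizeof(req), 0,
				     (struct sockaddr *)&nladdr,
				     sizeof(nladdr)) < 0)
			return 1;

		while (!done) {
			struct iovec iov = { .iov_base = NULL, .iov_len = 0 };
			struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1 };
			struct nlmsghdr *nh;
			ssize_t len;
			char *buf;

			/* Peek: the kernel reports the size of the next skb. */
			len = recvmsg(fd, &msg, MSG_PEEK | MSG_TRUNC);
			if (len <= 0)
				return 1;
			/* The fix: never read with less than 32k, so the
			 * kernel autosizes later dump skbs toward 32k too. */
			if (len < 32768)
				len = 32768;
			buf = malloc(len);
			if (!buf)
				return 1;
			iov.iov_base = buf;
			iov.iov_len  = len;
			len = recvmsg(fd, &msg, 0);
			if (len <= 0)
				return 1;
			/* Should climb from ~4k toward ~32k after read #1. */
			printf("read %zd bytes\n", len);
			for (nh = (struct nlmsghdr *)buf; NLMSG_OK(nh, len);
			     nh = NLMSG_NEXT(nh, len))
				if (nh->nlmsg_type == NLMSG_DONE)
					done = 1;
			free(buf);
		}
		close(fd);
		return 0;
	}

Run it under strace to see the effect: without the floor the peeks report ~3-4 KB each; with 32 KB reads in place they should quickly report close to 32 KB per call.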

Thread overview: 11+ messages
2017-10-26  1:41 [PATCHv5 iproute2 net-next 0/2] libnetlink: malloc correct buff at run time Hangbin Liu
2017-10-26  1:41 ` [PATCHv5 iproute2 net-next 2/2] lib/libnetlink: re malloc buff if size is not enough Hangbin Liu
2017-10-26  2:59   ` David Ahern
2017-10-26 10:24     ` Stephen Hemminger
2017-10-26 15:28       ` David Ahern
2017-10-26 15:33         ` Phil Sutter
2017-10-26 15:42           ` David Ahern
2017-10-26 18:31             ` Phil Sutter
2019-02-12 23:32   ` Eric Dumazet
2019-02-12 23:43     ` Eric Dumazet [this message]
2017-10-26  1:41 ` [PATCHv5 iproute2 net-next 1/2] lib/libnetlink: update rtnl_talk to support malloc buff at run time Hangbin Liu
