From: Haakon Bugge <haakon.bugge@oracle.com>
To: Pavel Skripkin <paskripkin@gmail.com>
Cc: Santosh Shilimkar <santosh.shilimkar@oracle.com>,
"davem@davemloft.net" <davem@davemloft.net>,
"kuba@kernel.org" <kuba@kernel.org>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
OFED mailing list <linux-rdma@vger.kernel.org>,
"rds-devel@oss.oracle.com" <rds-devel@oss.oracle.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"syzbot+5134cdf021c4ed5aaa5f@syzkaller.appspotmail.com"
<syzbot+5134cdf021c4ed5aaa5f@syzkaller.appspotmail.com>
Subject: Re: [PATCH v2] net: rds: fix memory leak in rds_recvmsg
Date: Tue, 8 Jun 2021 12:29:56 +0000
Message-ID: <3DEDB1BE-C48D-4B64-AB4B-B9C6D9505FC4@oracle.com>
In-Reply-To: <20210608080641.16543-1-paskripkin@gmail.com>
> On 8 Jun 2021, at 10:06, Pavel Skripkin <paskripkin@gmail.com> wrote:
>
> Syzbot reported a memory leak in rds. The problem
> was a refcount that was never put on an error path.
>
> int rds_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
> int msg_flags)
> {
> ...
>
> if (!rds_next_incoming(rs, &inc)) {
> ...
> }
>
> After this "if", the refcount of inc has been incremented, and
>
> if (rds_cmsg_recv(inc, msg, rs)) {
> ret = -EFAULT;
> goto out;
> }
> ...
> out:
> return ret;
> }
>
> if rds_cmsg_recv() fails, the refcount is never decremented. This is
> easy to see in the ftrace log: the rds_inc_addref() in rds_recvmsg()
> has no matching rds_inc_put() after rds_cmsg_recv() fails:
>
> 1) | rds_recvmsg() {
> 1) 3.721 us | rds_inc_addref();
> 1) 3.853 us | rds_message_inc_copy_to_user();
> 1) + 10.395 us | rds_cmsg_recv();
> 1) + 34.260 us | }
>
> Fixes: bdbe6fbc6a2f ("RDS: recv.c")
> Reported-and-tested-by: syzbot+5134cdf021c4ed5aaa5f@syzkaller.appspotmail.com
> Signed-off-by: Pavel Skripkin <paskripkin@gmail.com>
Reviewed-by: Håkon Bugge <haakon.bugge@oracle.com>

Thanks for fixing this,

Håkon
> ---
>
> Changes in v2:
> Changed goto to break.
>
> ---
> net/rds/recv.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/net/rds/recv.c b/net/rds/recv.c
> index 4db109fb6ec2..5b426dc3634d 100644
> --- a/net/rds/recv.c
> +++ b/net/rds/recv.c
> @@ -714,7 +714,7 @@ int rds_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
>
> if (rds_cmsg_recv(inc, msg, rs)) {
> ret = -EFAULT;
> - goto out;
> + break;
> }
> rds_recvmsg_zcookie(rs, msg);
>
> --
> 2.31.1
>
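The leak and the fix above can be modeled in a small userspace C sketch. All names here (inc_t, inc_addref(), inc_put(), the recvmsg_* helpers) are hypothetical stand-ins, not the actual kernel code: with a "goto out" past the trailing put, the reference leaks; with a "break", control falls through to the put that pairs with the addref.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of a refcounted incoming-message object. */
typedef struct { int refcount; } inc_t;

static void inc_addref(inc_t *inc) { inc->refcount++; }
static void inc_put(inc_t *inc)    { inc->refcount--; }

/* Buggy flow: on error, "goto out" jumps past the put, leaking a ref. */
static int recvmsg_buggy(inc_t *inc, bool cmsg_fails)
{
	int ret = 0;

	inc_addref(inc);
	if (cmsg_fails) {
		ret = -1;
		goto out;	/* skips inc_put(): refcount stays elevated */
	}
	inc_put(inc);
out:
	return ret;
}

/* Fixed flow: "break" leaves the loop, so the put below always runs. */
static int recvmsg_fixed(inc_t *inc, bool cmsg_fails)
{
	int ret = 0;

	while (1) {
		inc_addref(inc);
		if (cmsg_fails) {
			ret = -1;
			break;	/* cleanup after the loop still runs */
		}
		break;
	}
	inc_put(inc);		/* always pairs with inc_addref() */
	return ret;
}
```

Running the two variants on the error path shows the imbalance directly: the buggy version leaves the refcount at 1, the fixed version returns it to 0.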
2021-06-07 19:41 [PATCH] net: rds: fix memory leak in rds_recvmsg Pavel Skripkin
2021-06-08 7:11 ` Haakon Bugge
2021-06-08 8:00 ` Pavel Skripkin
2021-06-08 8:06 ` [PATCH v2] " Pavel Skripkin
2021-06-08 12:29 ` Haakon Bugge [this message]
2021-06-08 14:41 ` Santosh Shilimkar
2021-06-08 23:40 ` patchwork-bot+netdevbpf