From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Xin Long, Jon Maloy, "David S. Miller", Sasha Levin,
	netdev@vger.kernel.org, tipc-discussion@lists.sourceforge.net
Subject: [PATCH AUTOSEL 5.4 006/109] tipc: keep the skb in rcv queue until the whole data is read
Date: Thu, 9 Sep 2021 07:53:23 -0400
Message-Id: <20210909115507.147917-6-sashal@kernel.org>
In-Reply-To: <20210909115507.147917-1-sashal@kernel.org>
References: <20210909115507.147917-1-sashal@kernel.org>

From: Xin Long

[ Upstream commit f4919ff59c2828064b4156e3c3600a169909bcf4 ]

Currently, when userspace reads a datagram with a buffer that is
smaller than the datagram, the data is truncated and only part of it
reaches the user. This is not right: users do not know the datagram
size in advance and should not have to supply a huge buffer just to
avoid the truncation.

This patch fixes it by keeping the skb in the receive queue until the
whole data has been read by the user. Only the last chunk of the
datagram is marked with MSG_EOR, just as TCP/SCTP does.

Note that this works as described only when MSG_EOR is set in the
flags parameter of recvmsg(), so it does not break any old user
applications.

Signed-off-by: Xin Long
Acked-by: Jon Maloy
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin
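[ Not part of the patch: a minimal userspace sketch of the receive loop
  the new behaviour enables. It assumes fd is an already created and
  bound TIPC datagram socket; the helper name read_whole_datagram() and
  the 64-byte chunk buffer are made up for illustration. ]

#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/uio.h>

static ssize_t read_whole_datagram(int fd, char *out, size_t outlen)
{
	size_t total = 0;

	for (;;) {
		char chunk[64];		/* deliberately smaller than the datagram */
		struct iovec iov = { .iov_base = chunk, .iov_len = sizeof(chunk) };
		struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1 };
		ssize_t n;

		/* MSG_EOR in flags opts in to partial reads: the kernel keeps
		 * the skb queued instead of truncating the unread remainder.
		 */
		n = recvmsg(fd, &msg, MSG_EOR);
		if (n < 0)
			return -1;

		if (total + (size_t)n <= outlen)
			memcpy(out + total, chunk, (size_t)n);
		total += (size_t)n;

		/* only the last chunk of the datagram carries MSG_EOR */
		if (msg.msg_flags & MSG_EOR)
			break;
	}

	return (ssize_t)total;
}

[ Without MSG_EOR in the flags, recvmsg() keeps the old semantics
  (truncate and set MSG_TRUNC), which is why existing applications are
  unaffected. ]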
---
 net/tipc/socket.c | 36 +++++++++++++++++++++++++++---------
 1 file changed, 27 insertions(+), 9 deletions(-)

diff --git a/net/tipc/socket.c b/net/tipc/socket.c
index a5922ce9109c..231f9e1bf6bb 100644
--- a/net/tipc/socket.c
+++ b/net/tipc/socket.c
@@ -1756,6 +1756,7 @@ static int tipc_recvmsg(struct socket *sock, struct msghdr *m,
 	bool connected = !tipc_sk_type_connectionless(sk);
 	struct tipc_sock *tsk = tipc_sk(sk);
 	int rc, err, hlen, dlen, copy;
+	struct tipc_skb_cb *skb_cb;
 	struct sk_buff_head xmitq;
 	struct tipc_msg *hdr;
 	struct sk_buff *skb;
@@ -1779,6 +1780,7 @@ static int tipc_recvmsg(struct socket *sock, struct msghdr *m,
 	if (unlikely(rc))
 		goto exit;
 	skb = skb_peek(&sk->sk_receive_queue);
+	skb_cb = TIPC_SKB_CB(skb);
 	hdr = buf_msg(skb);
 	dlen = msg_data_sz(hdr);
 	hlen = msg_hdr_sz(hdr);
@@ -1798,18 +1800,33 @@ static int tipc_recvmsg(struct socket *sock, struct msghdr *m,
 
 	/* Capture data if non-error msg, otherwise just set return value */
 	if (likely(!err)) {
-		copy = min_t(int, dlen, buflen);
-		if (unlikely(copy != dlen))
-			m->msg_flags |= MSG_TRUNC;
-		rc = skb_copy_datagram_msg(skb, hlen, m, copy);
+		int offset = skb_cb->bytes_read;
+
+		copy = min_t(int, dlen - offset, buflen);
+		rc = skb_copy_datagram_msg(skb, hlen + offset, m, copy);
+		if (unlikely(rc))
+			goto exit;
+		if (unlikely(offset + copy < dlen)) {
+			if (flags & MSG_EOR) {
+				if (!(flags & MSG_PEEK))
+					skb_cb->bytes_read = offset + copy;
+			} else {
+				m->msg_flags |= MSG_TRUNC;
+				skb_cb->bytes_read = 0;
+			}
+		} else {
+			if (flags & MSG_EOR)
+				m->msg_flags |= MSG_EOR;
+			skb_cb->bytes_read = 0;
+		}
 	} else {
 		copy = 0;
 		rc = 0;
-		if (err != TIPC_CONN_SHUTDOWN && connected && !m->msg_control)
+		if (err != TIPC_CONN_SHUTDOWN && connected && !m->msg_control) {
 			rc = -ECONNRESET;
+			goto exit;
+		}
 	}
-	if (unlikely(rc))
-		goto exit;
 
 	/* Mark message as group event if applicable */
 	if (unlikely(grp_evt)) {
@@ -1832,9 +1849,10 @@ static int tipc_recvmsg(struct socket *sock, struct msghdr *m,
 		tipc_node_distr_xmit(sock_net(sk), &xmitq);
 	}
 
-	tsk_advance_rx_queue(sk);
+	if (!skb_cb->bytes_read)
+		tsk_advance_rx_queue(sk);
 
-	if (likely(!connected))
+	if (likely(!connected) || skb_cb->bytes_read)
 		goto exit;
 
 	/* Send connection flow control advertisement when applicable */
-- 
2.30.2