From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753566AbbK2WH1 (ORCPT );
	Sun, 29 Nov 2015 17:07:27 -0500
Received: from wtarreau.pck.nerim.net ([62.212.114.60]:56686 "EHLO 1wt.eu"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1753016AbbK2WHY (ORCPT );
	Sun, 29 Nov 2015 17:07:24 -0500
Message-Id: <20151129214703.344164051@1wt.eu>
User-Agent: quilt/0.63-1
Date: Sun, 29 Nov 2015 22:47:13 +0100
From: Willy Tarreau
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Konstantin Khlebnikov, Herbert Xu, "David S. Miller",
	Ben Hutchings, Willy Tarreau
Subject: [PATCH 2.6.32 11/38] [PATCH 11/38] net: Clone skb before setting peeked flag
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-15
In-Reply-To: <8acf8256ccc72771a80b7851061027bc@local>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

2.6.32-longterm review patch.  If anyone has any objections, please let me know.

------------------

commit 738ac1ebb96d02e0d23bc320302a6ea94c612dec upstream.

Shared skbs must not be modified and this is crucial for broadcast
and/or multicast paths where we use it as an optimisation to avoid
unnecessary cloning.

The function skb_recv_datagram breaks this rule by setting peeked
without cloning the skb first.  This causes funky races which leads
to double-free.

This patch fixes this by cloning the skb and replacing the skb
in the list when setting skb->peeked.

Fixes: a59322be07c9 ("[UDP]: Only increment counter on first peek/recv")
Reported-by: Konstantin Khlebnikov
Signed-off-by: Herbert Xu
Signed-off-by: David S. Miller
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings
(cherry picked from commit 72e6f0680249f5e0a87f2b282d033baefd90d84e)
[wt: adjusted context for 2.6.32. Introduces a bug, see next commit]
Signed-off-by: Willy Tarreau
---
 net/core/datagram.c | 40 +++++++++++++++++++++++++++++++++++++---
 1 file changed, 37 insertions(+), 3 deletions(-)

diff --git a/net/core/datagram.c b/net/core/datagram.c
index 4ade301..cbb3100 100644
--- a/net/core/datagram.c
+++ b/net/core/datagram.c
@@ -127,6 +127,35 @@ out_noerr:
 	goto out;
 }
 
+static int skb_set_peeked(struct sk_buff *skb)
+{
+	struct sk_buff *nskb;
+
+	if (skb->peeked)
+		return 0;
+
+	/* We have to unshare an skb before modifying it. */
+	if (!skb_shared(skb))
+		goto done;
+
+	nskb = skb_clone(skb, GFP_ATOMIC);
+	if (!nskb)
+		return -ENOMEM;
+
+	skb->prev->next = nskb;
+	skb->next->prev = nskb;
+	nskb->prev = skb->prev;
+	nskb->next = skb->next;
+
+	consume_skb(skb);
+	skb = nskb;
+
+done:
+	skb->peeked = 1;
+
+	return 0;
+}
+
 /**
  *	__skb_recv_datagram - Receive a datagram skbuff
  *	@sk: socket
@@ -160,6 +189,7 @@ struct sk_buff *__skb_recv_datagram(struct sock *sk, unsigned flags,
 				    int *peeked, int *err)
 {
 	struct sk_buff *skb;
+	unsigned long cpu_flags;
 	long timeo;
 	/*
 	 * Caller is allowed not to check sk->sk_err before skb_recv_datagram()
@@ -178,14 +208,16 @@ struct sk_buff *__skb_recv_datagram(struct sock *sk, unsigned flags,
 		 * Look at current nfs client by the way...
 		 * However, this function was corrent in any case. 8)
 		 */
-		unsigned long cpu_flags;
-
 		spin_lock_irqsave(&sk->sk_receive_queue.lock, cpu_flags);
 		skb = skb_peek(&sk->sk_receive_queue);
 		if (skb) {
 			*peeked = skb->peeked;
 			if (flags & MSG_PEEK) {
-				skb->peeked = 1;
+
+				error = skb_set_peeked(skb);
+				if (error)
+					goto unlock_err;
+
 				atomic_inc(&skb->users);
 			} else
 				__skb_unlink(skb, &sk->sk_receive_queue);
@@ -204,6 +236,8 @@ struct sk_buff *__skb_recv_datagram(struct sock *sk, unsigned flags,
 
 	return NULL;
 
+unlock_err:
+	spin_unlock_irqrestore(&sk->sk_receive_queue.lock, cpu_flags);
 no_packet:
 	*err = error;
 	return NULL;
-- 
1.7.12.2.21.g234cd45.dirty