From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 1 Aug 2018 12:35:37 +0200
From: Florian Westphal
To: Dmitry Vyukov
Cc: Linus Torvalds, Christoph Lameter, Andrey Ryabinin, Theodore Ts'o,
	Jan Kara, linux-ext4@vger.kernel.org, Greg Kroah-Hartman,
	Pablo Neira Ayuso, Jozsef Kadlecsik, Florian Westphal,
	David Miller, NetFilter, coreteam@netfilter.org,
	Network Development, Gerrit Renker, dccp@vger.kernel.org,
	Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Dave Airlie,
	intel-gfx, DRI, Eric Dumazet, Alexey Kuznetsov,
	Hideaki YOSHIFUJI, Ursula Braun, linux-s390,
	Linux Kernel Mailing List, Andrew Morton, linux-mm,
	Andrey Konovalov
Subject: Re: SLAB_TYPESAFE_BY_RCU without constructors (was Re: [PATCH v4 13/17] khwasan: add hooks implementation)
Message-ID: <20180801103537.d36t3snzulyuge7g@breakpoint.cc>
References: <01000164f169bc6b-c73a8353-d7d9-47ec-a782-90aadcb86bfb-000000@email.amazonses.com>
User-Agent: NeoMutt/20170113 (1.7.2)
List-ID: <linux-kernel.vger.kernel.org>

Dmitry Vyukov wrote:
> Still can't grasp all details.
> There is state that we read without taking ct->ct_general.use ref
> first, namely ct->state and what's used by nf_ct_key_equal.
> So let's say the entry we want to find is in the list, but
> ____nf_conntrack_find finds a wrong entry earlier because all state it
> looks at is random garbage, so it returns the wrong entry to
> __nf_conntrack_find_get.

If an entry can be found, it can't be random garbage.
We never link entries into the global table until their state has been
set up.

> Now (nf_ct_is_dying(ct) || !atomic_inc_not_zero(&ct->ct_general.use))
> check in __nf_conntrack_find_get passes, and it returns NULL to the
> caller (which means entry is not present).

So the entry is going away or is marked as dead, which for us is the
same as 'not present': we need to allocate a new entry.

> While in reality the entry
> is present, but we were just looking at the wrong one.

We never add identical tuples to the global table.

If N cores receive identical packets at the same time with no prior
state, all of them will allocate a new conntrack, but we notice this
when we try to insert the nf_conn entries into the table.

Only one will succeed; the other cpus have to cope with this
(worst case: all raced packets are dropped along with their conntrack
objects).

For lookup, we have the following scenarios:

1. It doesn't exist -> new allocation needed
2. It exists, is not dead, and has a nonzero refcount -> use it
3. It exists, but is marked as dying -> new allocation needed
4. It exists, but has a 0 reference count -> new allocation needed
5. It exists and we get a reference, but the 2nd nf_ct_key_equal check
   fails.  We saw a matching 'old incarnation' that just got re-used on
   another core. -> retry lookup

> Also I am not sure about order of checks in (nf_ct_is_dying(ct) ||
> !atomic_inc_not_zero(&ct->ct_general.use)), because checking state
> before taking the ref is only a best-effort hint, so it can actually
> be a dying entry when we take a ref.

Yes, it can also become a dying entry after we took the reference.

> So shouldn't it read something like the following?
>
> rcu_read_lock();
> begin:
> h = ____nf_conntrack_find(net, zone, tuple, hash);
> if (h) {
>         ct = nf_ct_tuplehash_to_ctrack(h);
>         if (!atomic_inc_not_zero(&ct->ct_general.use))
>                 goto begin;
>         if (unlikely(nf_ct_is_dying(ct)) ||
>             unlikely(!nf_ct_key_equal(h, tuple, zone, net))) {
>                 nf_ct_put(ct);

It would be ok to make this change, but the dying bit can be set at any
time, e.g. because userspace tells the kernel to flush the conntrack
table, so the refcount is always > 0 when the DYING bit is set.

I don't see why that would be a problem: the nf_conn struct will stay
valid until all cpus have dropped their references.  The check in the
lookup function only serves to hide the known-to-go-away entry.