From mboxrd@z Thu Jan 1 00:00:00 1970
From: ard.biesheuvel@linaro.org (Ard Biesheuvel)
Date: Mon, 31 Jul 2017 22:21:22 +0100
Subject: [PATCH v4] arm64: kernel: implement fast refcount checking
In-Reply-To:
References: <20170731192251.12491-1-ard.biesheuvel@linaro.org>
Message-ID:
To: linux-arm-kernel@lists.infradead.org
List-Id: linux-arm-kernel.lists.infradead.org

On 31 July 2017 at 22:16, Kees Cook wrote:
> On Mon, Jul 31, 2017 at 12:22 PM, Ard Biesheuvel
> wrote:
>> v4: Implement add-from-zero checking using a conditional compare rather
>> than a conditional branch, which I omitted from v3 due to its 10%
>> performance hit: this results in the new refcount being written back to
>> memory before the handler is invoked, which is more in line with the
>> other checks, and is apparently much easier on the branch predictor,
>> given that there is no performance hit whatsoever.
>
> So refcount_inc() and refcount_add(n, ...) will write 1 and n
> respectively, then hit the handler to saturate?

Yes, but this is essentially what occurs on overflow and sub-to-zero as
well: the result is always stored before hitting the handler. Isn't this
the case for x86 as well?

> That seems entirely fine to me: checking inc-from-zero is just a
> protection against a possible double-free condition. It's still
> technically a race, but a narrow race on a rare condition is better
> than being able to always win it.

Indeed.

> Nice!

Thanks!
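
P.S. For anyone following along, the semantics we are describing boil
down to something like the C model below. This is an illustrative
sketch only: the function names and the saturation value are made-up
stand-ins, and the real patch works at the instruction level (the
atomics plus condition-flag tests such as the conditional compare
mentioned above), not with C11 atomics.

  #include <limits.h>
  #include <stdatomic.h>

  #define REFCOUNT_SATURATED (INT_MIN / 2)  /* illustrative value */

  /* Slow path: runs only after the bad value has already hit memory. */
  static void refcount_handler(atomic_int *r)
  {
          /* Pin the counter so it can never reach zero again. */
          atomic_store(r, REFCOUNT_SATURATED);
  }

  static void refcount_add_checked(int i, atomic_int *r)
  {
          /* The new value is stored unconditionally, up front. */
          int old = atomic_fetch_add(r, i);
          /* Unsigned arithmetic sidesteps signed-overflow UB. */
          int new = (int)((unsigned int)old + (unsigned int)i);

          /*
           * Post-checks, standing in for the flag tests: overflow
           * (the result went negative), or add-from-zero (old == 0),
           * the latter indicating a likely use-after-free.
           */
          if (new < 0 || old == 0)
                  refcount_handler(r);
  }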