From mboxrd@z Thu Jan  1 00:00:00 1970
From: keescook@chromium.org (Kees Cook)
Date: Mon, 31 Jul 2017 14:16:21 -0700
Subject: [PATCH v4] arm64: kernel: implement fast refcount checking
In-Reply-To: <20170731192251.12491-1-ard.biesheuvel@linaro.org>
References: <20170731192251.12491-1-ard.biesheuvel@linaro.org>
Message-ID:
To: linux-arm-kernel@lists.infradead.org
List-Id: linux-arm-kernel.lists.infradead.org

On Mon, Jul 31, 2017 at 12:22 PM, Ard Biesheuvel wrote:
> v4: Implement add-from-zero checking using a conditional compare rather
> than a conditional branch, which I omitted from v3 due to the 10%
> performance hit: this will result in the new refcount being written back
> to memory before invoking the handler, which is more in line with the
> other checks, and is apparently much easier on the branch predictor,
> given that there is no performance hit whatsoever.

So refcount_inc() and refcount_add(n, ...) will write 1 and n
respectively, then hit the handler to saturate?

That seems entirely fine to me: checking inc-from-zero is just a
protection against a possible double-free condition. It's still
technically a race, but a narrow race on a rare condition is better
than being able to always win it.

Nice!
-Kees

--
Kees Cook
Pixel Security
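[Editorial note: the write-back-then-saturate ordering discussed above can be sketched as a plain C model. This is only an illustration of the semantics under discussion, not the actual arm64 atomics or handler; `model_refcount_add()` and the `REFCOUNT_SATURATED` value chosen here are hypothetical names for the sketch.]

```c
#include <assert.h>
#include <limits.h>

/* Saturation value used here purely for illustration; the point is
 * only that the counter is pinned to a "poisoned" value. */
#define REFCOUNT_SATURATED (INT_MIN / 2)

/* Rough single-threaded model of the ordering described in v4: the
 * new count is written back to memory first, and the check fires
 * afterwards, saturating if the pre-add value was zero
 * (inc-from-zero, i.e. a likely double-free) or if the add wrapped
 * negative. A sketch of the semantics, not an implementation. */
static int model_refcount_add(int n, int *r)
{
	int old = *r;

	*r = old + n;              /* new value lands in memory first */
	if (old == 0 || *r < 0)    /* "handler" runs after the write-back */
		*r = REFCOUNT_SATURATED;
	return old;
}
```

In this model, an increment from zero briefly makes the stored count 1 before the check pins it to the saturated value, which mirrors the narrow-race observation above.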