From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S1752384AbdHIUiW (ORCPT);
        Wed, 9 Aug 2017 16:38:22 -0400
Received: from mail.linuxfoundation.org ([140.211.169.12]:42542 "EHLO
        mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
        with ESMTP id S1752064AbdHIUiU (ORCPT);
        Wed, 9 Aug 2017 16:38:20 -0400
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Joel Fernandes,
        Kees Cook, Leo Yan
Subject: [PATCH 3.18 02/92] pstore: Make spinlock per zone instead of global
Date: Wed, 9 Aug 2017 13:36:30 -0700
Message-Id: <20170809202155.541034653@linuxfoundation.org>
X-Mailer: git-send-email 2.14.0
In-Reply-To: <20170809202155.435709888@linuxfoundation.org>
References: <20170809202155.435709888@linuxfoundation.org>
User-Agent: quilt/0.65
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

3.18-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Joel Fernandes

commit 109704492ef637956265ec2eb72ae7b3b39eb6f4 upstream.

Currently pstore has a global spinlock for all zones. Since the zones
are independent and modify different areas of memory, there's no need
to have a global lock, so we should use a per-zone lock as introduced
here. Also, when ramoops's ftrace use-case has a FTRACE_PER_CPU flag
introduced later, which splits the ftrace memory area into a single
zone per CPU, it will eliminate the need for locking. In preparation
for this, make the locking optional.

Signed-off-by: Joel Fernandes
[kees: updated commit message]
Signed-off-by: Kees Cook
Cc: Leo Yan
Signed-off-by: Greg Kroah-Hartman

---
 fs/pstore/ram_core.c       |   11 +++++------
 include/linux/pstore_ram.h |    1 +
 2 files changed, 6 insertions(+), 6 deletions(-)

--- a/fs/pstore/ram_core.c
+++ b/fs/pstore/ram_core.c
@@ -80,8 +80,6 @@ static void buffer_size_add_atomic(struc
 	} while (atomic_cmpxchg(&prz->buffer->size, old, new) != old);
 }
 
-static DEFINE_RAW_SPINLOCK(buffer_lock);
-
 /* increase and wrap the start pointer, returning the old value */
 static size_t buffer_start_add_locked(struct persistent_ram_zone *prz, size_t a)
 {
@@ -89,7 +87,7 @@ static size_t buffer_start_add_locked(st
 	int new;
 	unsigned long flags;
 
-	raw_spin_lock_irqsave(&buffer_lock, flags);
+	raw_spin_lock_irqsave(&prz->buffer_lock, flags);
 
 	old = atomic_read(&prz->buffer->start);
 	new = old + a;
@@ -97,7 +95,7 @@ static size_t buffer_start_add_locked(st
 		new -= prz->buffer_size;
 	atomic_set(&prz->buffer->start, new);
 
-	raw_spin_unlock_irqrestore(&buffer_lock, flags);
+	raw_spin_unlock_irqrestore(&prz->buffer_lock, flags);
 
 	return old;
 }
@@ -109,7 +107,7 @@ static void buffer_size_add_locked(struc
 	size_t new;
 	unsigned long flags;
 
-	raw_spin_lock_irqsave(&buffer_lock, flags);
+	raw_spin_lock_irqsave(&prz->buffer_lock, flags);
 
 	old = atomic_read(&prz->buffer->size);
 	if (old == prz->buffer_size)
@@ -121,7 +119,7 @@ static void buffer_size_add_locked(struc
 	atomic_set(&prz->buffer->size, new);
 
 exit:
-	raw_spin_unlock_irqrestore(&buffer_lock, flags);
+	raw_spin_unlock_irqrestore(&prz->buffer_lock, flags);
 }
 
 static size_t (*buffer_start_add)(struct persistent_ram_zone *, size_t) = buffer_start_add_atomic;
@@ -489,6 +487,7 @@ static int persistent_ram_post_init(stru
 
 	prz->buffer->sig = sig;
 	persistent_ram_zap(prz);
+	prz->buffer_lock = __RAW_SPIN_LOCK_UNLOCKED(buffer_lock);
 
 	return 0;
 }
--- a/include/linux/pstore_ram.h
+++ b/include/linux/pstore_ram.h
@@ -39,6 +39,7 @@ struct persistent_ram_zone {
 	void *vaddr;
 	struct persistent_ram_buffer *buffer;
 	size_t buffer_size;
+	raw_spinlock_t buffer_lock;
 
 	/* ECC correction */
 	char *par_buffer;
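
For readers less familiar with the pstore internals, the effect of the change
can be illustrated outside the kernel. The sketch below is a minimal userspace
analogy, not the kernel code: struct zone, zone_init() and zone_append() are
hypothetical names, and a pthread mutex stands in for raw_spinlock_t. It shows
the same idea as the patch: once each zone embeds its own lock, writers to
independent zones no longer serialize on one global lock.

/*
 * Userspace sketch of per-zone locking (assumed names, pthread mutex
 * in place of the kernel's raw_spinlock_t).
 */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

struct zone {
	char buf[128];
	size_t used;
	pthread_mutex_t lock;	/* per-zone lock, like prz->buffer_lock */
};

static void zone_init(struct zone *z)
{
	z->used = 0;
	/* analogous to __RAW_SPIN_LOCK_UNLOCKED in persistent_ram_post_init() */
	pthread_mutex_init(&z->lock, NULL);
}

static void zone_append(struct zone *z, const char *s)
{
	/* only writers to *this* zone contend here */
	pthread_mutex_lock(&z->lock);
	size_t len = strlen(s);
	if (z->used + len < sizeof(z->buf)) {
		memcpy(z->buf + z->used, s, len);
		z->used += len;
	}
	pthread_mutex_unlock(&z->lock);
}

int main(void)
{
	struct zone a, b;

	zone_init(&a);
	zone_init(&b);
	/* writes to different zones take different locks, never each other's */
	zone_append(&a, "dmesg record");
	zone_append(&b, "ftrace record");
	printf("a: %zu bytes, b: %zu bytes\n", a.used, b.used);
	return 0;
}

Built with "cc -pthread", the two zone_append() calls above acquire unrelated
locks, which is what removes the cross-zone contention that the old global
buffer_lock imposed.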