From: Gao Xiang <gaoxiang25@huawei.com>
To: Greg Kroah-Hartman
Cc: LKML, Chao Yu, Miao Xie, Gao Xiang
Subject: [PATCH 04/10] staging: erofs: fix `erofs_workgroup_{try_to_freeze, unfreeze}'
Date: Tue, 20 Nov 2018 22:34:19 +0800
Message-ID: <20181120143425.43637-5-gaoxiang25@huawei.com>
In-Reply-To: <20181120143425.43637-1-gaoxiang25@huawei.com>
References: <20181120143425.43637-1-gaoxiang25@huawei.com>

There are two minor issues in the current freeze interface:

1) The freeze interfaces have nothing to do with CONFIG_DEBUG_SPINLOCK,
   so fix the incorrect preprocessor conditions;

2) On SMP platforms, preemption should also be disabled before doing
   atomic_cmpxchg(); otherwise a higher-priority task can preempt in
   the window between atomic_cmpxchg() and preempt_disable() and then
   spin on the locked refcount.
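Note (illustration only, not part of this patch): a minimal sketch of
the racy pre-patch SMP ordering described in 2) above; the function
name old_try_to_freeze is made up for the example, the body is
simplified from the removed code in the diff below.

/* Pre-patch SMP path, simplified for illustration. */
static inline bool old_try_to_freeze(struct erofs_workgroup *grp, int v)
{
	if (v != atomic_cmpxchg(&grp->refcount, v, EROFS_LOCKED_MAGIC))
		return false;
	/*
	 * Preemption window: a higher-priority task scheduled in here
	 * can spin on EROFS_LOCKED_MAGIC in
	 * erofs_wait_on_workgroup_freezed() while this task sits
	 * off-CPU, never reaching unfreeze.
	 */
	preempt_disable();
	return true;
}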
Reviewed-by: Chao Yu
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
---
 drivers/staging/erofs/internal.h | 41 ++++++++++++++++++++++++----------------
 1 file changed, 25 insertions(+), 16 deletions(-)

diff --git a/drivers/staging/erofs/internal.h b/drivers/staging/erofs/internal.h
index eb80ba44d072..2e0ef92c138b 100644
--- a/drivers/staging/erofs/internal.h
+++ b/drivers/staging/erofs/internal.h
@@ -194,40 +194,49 @@ struct erofs_workgroup {
 
 #define EROFS_LOCKED_MAGIC	(INT_MIN | 0xE0F510CCL)
 
-static inline bool erofs_workgroup_try_to_freeze(
-	struct erofs_workgroup *grp, int v)
+#if defined(CONFIG_SMP)
+static inline bool erofs_workgroup_try_to_freeze(struct erofs_workgroup *grp,
+						 int val)
 {
-#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
-	if (v != atomic_cmpxchg(&grp->refcount,
-		v, EROFS_LOCKED_MAGIC))
-		return false;
 	preempt_disable();
-#else
-	preempt_disable();
-	if (atomic_read(&grp->refcount) != v) {
+	if (val != atomic_cmpxchg(&grp->refcount, val, EROFS_LOCKED_MAGIC)) {
 		preempt_enable();
 		return false;
 	}
-#endif
 	return true;
 }
 
-static inline void erofs_workgroup_unfreeze(
-	struct erofs_workgroup *grp, int v)
+static inline void erofs_workgroup_unfreeze(struct erofs_workgroup *grp,
+					    int orig_val)
 {
-#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
-	atomic_set(&grp->refcount, v);
-#endif
+	atomic_set(&grp->refcount, orig_val);
 	preempt_enable();
 }
 
-#if defined(CONFIG_SMP)
 static inline int erofs_wait_on_workgroup_freezed(struct erofs_workgroup *grp)
 {
 	return atomic_cond_read_relaxed(&grp->refcount,
 					VAL != EROFS_LOCKED_MAGIC);
 }
 #else
+static inline bool erofs_workgroup_try_to_freeze(struct erofs_workgroup *grp,
+						 int val)
+{
+	preempt_disable();
+	/* no need to spin on UP platforms, let's just disable preemption. */
+	if (val != atomic_read(&grp->refcount)) {
+		preempt_enable();
+		return false;
+	}
+	return true;
+}
+
+static inline void erofs_workgroup_unfreeze(struct erofs_workgroup *grp,
+					    int orig_val)
+{
+	preempt_enable();
+}
+
 static inline int erofs_wait_on_workgroup_freezed(struct erofs_workgroup *grp)
 {
 	int v = atomic_read(&grp->refcount);
-- 
2.14.4
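Note (illustration only, not part of this patch): a minimal sketch of
the expected caller pattern for this freeze/unfreeze pair; the caller
try_evict_workgroup() is hypothetical. On UP kernels the #else branch
can get away with a plain atomic_read() because, once preemption is
disabled, no other task can race on the refcount, so there is nothing
to spin on.

/* Hypothetical caller, shown for illustration only. */
static bool try_evict_workgroup(struct erofs_workgroup *grp)
{
	int refs = atomic_read(&grp->refcount);

	/* Fails if the refcount changed after it was sampled. */
	if (!erofs_workgroup_try_to_freeze(grp, refs))
		return false;

	/* ... tear down or reuse the workgroup while it is frozen ... */

	/* Restore the sampled refcount and re-enable preemption. */
	erofs_workgroup_unfreeze(grp, refs);
	return true;
}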