From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Palmer Dabbelt, Sasha Levin
Subject: [PATCH 4.19 60/86] RISC-V: Upgrade smp_mb__after_spinlock() to iorw,iorw
Date: Mon, 27 Jul 2020 16:04:34 +0200
Message-Id: <20200727134917.444705980@linuxfoundation.org>
In-Reply-To: <20200727134914.312934924@linuxfoundation.org>
References: <20200727134914.312934924@linuxfoundation.org>

From: Palmer Dabbelt

[ Upstream commit 38b7c2a3ffb1fce8358ddc6006cfe5c038ff9963 ]

While digging through the recent mmiowb preemption issue it came up
that we aren't actually preventing IO from crossing a scheduling
boundary.  While it's a bit ugly to overload smp_mb__after_spinlock()
with this behavior, it's what PowerPC is doing so there's some
precedent.
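To make the hazard concrete, a minimal hypothetical sketch (dev_kick()
and the register offset are invented for illustration; they are not
taken from the patch):

#include <linux/io.h>

/* Hypothetical driver helper, for illustration only. */
static void dev_kick(void __iomem *regs)
{
	/* Device write issued while preemption is enabled. */
	writel(1, regs + 0x04);

	/*
	 * If the task is preempted here and resumed on another hart,
	 * smp_mb__after_spinlock() in the scheduler is what must order
	 * the writel() above before the task runs again.  A plain
	 * "fence rw,rw" lacks device output ("o") in its predecessor
	 * set, so the write could still be in flight; "fence iorw,iorw"
	 * closes that window.
	 */
}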
Signed-off-by: Palmer Dabbelt
Signed-off-by: Sasha Levin
---
 arch/riscv/include/asm/barrier.h | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/include/asm/barrier.h b/arch/riscv/include/asm/barrier.h
index d4628e4b3a5ea..f4c92c91aa047 100644
--- a/arch/riscv/include/asm/barrier.h
+++ b/arch/riscv/include/asm/barrier.h
@@ -69,8 +69,16 @@ do {									\
  * The AQ/RL pair provides a RCpc critical section, but there's not really any
  * way we can take advantage of that here because the ordering is only enforced
  * on that one lock.  Thus, we're just doing a full fence.
+ *
+ * Since we allow writeX to be called from preemptive regions we need at least
+ * an "o" in the predecessor set to ensure device writes are visible before the
+ * task is marked as available for scheduling on a new hart.  While I don't see
+ * any concrete reason we need a full IO fence, it seems safer to just upgrade
+ * this in order to avoid any IO crossing a scheduling boundary.  In both
+ * instances the scheduler pairs this with an mb(), so nothing is necessary on
+ * the new hart.
  */
-#define smp_mb__after_spinlock()	RISCV_FENCE(rw,rw)
+#define smp_mb__after_spinlock()	RISCV_FENCE(iorw,iorw)
 
 #include <asm-generic/barrier.h>
 
-- 
2.25.1
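For context, RISCV_FENCE() as defined earlier in asm/barrier.h emits a
single RISC-V "fence" instruction, so this one-line change only swaps
the instruction the scheduler barrier expands to (macro reproduced
below for reference):

#define RISCV_FENCE(p, s) \
	__asm__ __volatile__ ("fence " #p "," #s : : : "memory")

/*
 * Resulting instruction, before and after this patch:
 *
 *   smp_mb__after_spinlock()  ->  fence rw,rw      (memory only)
 *   smp_mb__after_spinlock()  ->  fence iorw,iorw  (I/O + memory)
 *
 * The "i"/"o" specifiers add device input/output to both the
 * predecessor and successor sets, so MMIO accesses cannot be reordered
 * across the scheduling boundary.
 */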