From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: <linux-kernel-owner@vger.kernel.org>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S933312AbdCGV0h (ORCPT );
	Tue, 7 Mar 2017 16:26:37 -0500
Received: from mail.kernel.org ([198.145.29.136]:46012 "EHLO mail.kernel.org"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1755925AbdCGVYI (ORCPT );
	Tue, 7 Mar 2017 16:24:08 -0500
Message-Id: <20170307212313.787317186@goodmis.org>
User-Agent: quilt/0.63-1
Date: Tue, 07 Mar 2017 16:20:45 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior, John Kacur,
	Paul Gortmaker, Julia Cartwright, stable-rt@vger.kernel.org,
	"Peter Zijlstra (Intel)", John Ogness
Subject: [PATCH RT 04/10] x86/mm/cpa: avoid wbinvd() for PREEMPT
References: <20170307212041.431615000@goodmis.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-15
Content-Disposition: inline; filename=0004-x86-mm-cpa-avoid-wbinvd-for-PREEMPT.patch
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

4.4.50-rt63-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: John Ogness

Although wbinvd() is faster than flushing many individual pages, it blocks
the memory bus for "long" periods of time (>100us), thus directly causing
unusually large latencies on all CPUs, regardless of any CPU isolation
features that may be active.

For 1024 pages, flushing those pages individually can take up to 2200us,
but the task remains fully preemptible during that time.

Cc: stable-rt@vger.kernel.org
Acked-by: Peter Zijlstra (Intel)
Signed-off-by: John Ogness
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Steven Rostedt (VMware)
---
 arch/x86/mm/pageattr.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index b599a780a5a9..2e85c4117daf 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -208,7 +208,15 @@ static void cpa_flush_array(unsigned long *start, int numpages, int cache,
 			    int in_flags, struct page **pages)
 {
 	unsigned int i, level;
+#ifdef CONFIG_PREEMPT
+	/*
+	 * Avoid wbinvd() because it causes latencies on all CPUs,
+	 * regardless of any CPU isolation that may be in effect.
+	 */
+	unsigned long do_wbinvd = 0;
+#else
 	unsigned long do_wbinvd = cache && numpages >= 1024; /* 4M threshold */
+#endif
 
 	BUG_ON(irqs_disabled());
 
-- 
2.10.2
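
For readers outside the kernel tree: the short userspace sketch below is
only an approximation of the technique the changelog describes, not the
kernel's code. It flushes a 1024-page buffer one cache line at a time,
using the SSE2 _mm_clflush() intrinsic as a stand-in for the kernel's
clflush_cache_range(); the 64-byte line size, the buffer setup, and the
build command are all assumptions. The point it illustrates is the one
the patch relies on: many short flushes leave an interruption point after
every line, whereas a single wbinvd stalls the memory bus until the
entire cache has been written back.

/*
 * Minimal userspace sketch (not kernel code): flush a large buffer
 * cache line by cache line, mirroring the shape of the per-page path
 * that cpa_flush_array() takes when do_wbinvd is 0.
 * Build with: gcc -O2 -msse2 flush_sketch.c (x86 with clflush assumed)
 */
#include <emmintrin.h>	/* _mm_clflush(), stand-in for clflush_cache_range() */
#include <stdlib.h>
#include <string.h>

#define CACHE_LINE	64	/* assumption: 64-byte lines, usual on x86 */
#define PAGE_SZ		4096

static void flush_range(const char *start, size_t len)
{
	const char *p;

	/*
	 * Each _mm_clflush() evicts a single line, so every iteration
	 * is a short operation. A CONFIG_PREEMPT kernel running the
	 * equivalent loop can be preempted between pages; a single
	 * wbinvd offers no such break point.
	 */
	for (p = start; p < start + len; p += CACHE_LINE)
		_mm_clflush(p);
}

int main(void)
{
	size_t len = 1024 * PAGE_SZ;	/* 1024 pages: the 4M threshold */
	char *buf = malloc(len);

	if (!buf)
		return 1;
	memset(buf, 0, len);		/* touch the pages so lines are cached */
	flush_range(buf, len);
	free(buf);
	return 0;
}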