From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on
	aws-us-west-2-korg-lkml-1.web.codeaurora.org
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18])
	by smtp.lore.kernel.org (Postfix) with ESMTP id 9E712C6FD19
	for ; Sun, 12 Mar 2023 20:47:10 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S229783AbjCLUrJ (ORCPT );
	Sun, 12 Mar 2023 16:47:09 -0400
Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60140 "EHLO
	lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK)
	by vger.kernel.org with ESMTP id S229437AbjCLUrJ (ORCPT );
	Sun, 12 Mar 2023 16:47:09 -0400
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
	by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BB1B7C64C
	for ; Sun, 12 Mar 2023 13:47:07 -0700 (PDT)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by ams.source.kernel.org (Postfix) with ESMTPS id 5B6F6B80D65
	for ; Sun, 12 Mar 2023 20:47:06 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id E43FAC433EF;
	Sun, 12 Mar 2023 20:47:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org;
	s=korg; t=1678654025;
	bh=26u+nBjH4VF5ziGuopQ2Zuwl69le+T703CFIBn7dCzs=;
	h=Date:To:From:Subject:From;
	b=vjyISCZu3m7Bt5b557voY8hV88DRB5awNFOzyYfPXf1bzSLL/wv2iTFNWNyCqU/sL
	 Un+T49twZS66/i9bVLJGktxo7lfM3Dt0tXIFKSSPei3eTZGImddvibvV2EdqWVrVbx
	 R59cIcGPGWFUvacR9XWQZOvEhUmh8ephh/iAcack=
Date: Sun, 12 Mar 2023 13:47:04 -0700
To: mm-commits@vger.kernel.org, tglx@linutronix.de,
	richard.weinberger@gmail.com, bigeasy@linutronix.de,
	akpm@linux-foundation.org
From: Andrew Morton
Subject: + io-mapping-dont-disable-preempt-on-rt-in-io_mapping_map_atomic_wc.patch added to mm-unstable branch
Message-Id: <20230312204704.E43FAC433EF@smtp.kernel.org>
Precedence: bulk
Reply-To: 
linux-kernel@vger.kernel.org
List-ID: 
X-Mailing-List: mm-commits@vger.kernel.org

The patch titled
     Subject: io-mapping: don't disable preempt on RT in io_mapping_map_atomic_wc().
has been added to the -mm mm-unstable branch.  Its filename is
     io-mapping-dont-disable-preempt-on-rt-in-io_mapping_map_atomic_wc.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/io-mapping-dont-disable-preempt-on-rt-in-io_mapping_map_atomic_wc.patch

This patch will later appear in the mm-unstable branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Sebastian Andrzej Siewior
Subject: io-mapping: don't disable preempt on RT in io_mapping_map_atomic_wc().
Date: Fri, 10 Mar 2023 17:29:05 +0100

io_mapping_map_atomic_wc() disables preemption and pagefaults for
historical reasons.  The conversion to io_mapping_map_local_wc(), which
only disables migration, cannot be done wholesale because quite a few call
sites need to be updated to accommodate the changed semantics.

On PREEMPT_RT enabled kernels the io_mapping_map_atomic_wc() semantics are
problematic due to the implicit disabling of preemption, which makes it
impossible to acquire 'sleeping' spinlocks within the mapped atomic
sections.  PREEMPT_RT has replaced preempt_disable() with
migrate_disable() for more than a decade.
It could be argued that this is a justification for doing the replacement
unconditionally, but PREEMPT_RT covers only a limited number of
architectures and it disables some functionality, which limits the
coverage further.  Limit the replacement to PREEMPT_RT for now.  This is
also done for kmap_atomic().

Link: https://lkml.kernel.org/r/20230310162905.O57Pj7hh@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior
Reported-by: Richard Weinberger
  Link: https://lore.kernel.org/CAFLxGvw0WMxaMqYqJ5WgvVSbKHq2D2xcXTOgMCpgq9nDC-MWTQ@mail.gmail.com
Cc: Thomas Gleixner
Signed-off-by: Andrew Morton
---

--- a/include/linux/io-mapping.h~io-mapping-dont-disable-preempt-on-rt-in-io_mapping_map_atomic_wc
+++ a/include/linux/io-mapping.h
@@ -69,7 +69,10 @@ io_mapping_map_atomic_wc(struct io_mappi
 	BUG_ON(offset >= mapping->size);
 	phys_addr = mapping->base + offset;
-	preempt_disable();
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		preempt_disable();
+	else
+		migrate_disable();
 	pagefault_disable();
 	return __iomap_local_pfn_prot(PHYS_PFN(phys_addr), mapping->prot);
 }
@@ -79,7 +82,10 @@ io_mapping_unmap_atomic(void __iomem *va
 {
 	kunmap_local_indexed((void __force *)vaddr);
 	pagefault_enable();
-	preempt_enable();
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		preempt_enable();
+	else
+		migrate_enable();
 }
 
 static inline void __iomem *
@@ -162,7 +168,10 @@ static inline void __iomem *
 io_mapping_map_atomic_wc(struct io_mapping *mapping,
 			 unsigned long offset)
 {
-	preempt_disable();
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		preempt_disable();
+	else
+		migrate_disable();
 	pagefault_disable();
 	return io_mapping_map_wc(mapping, offset, PAGE_SIZE);
 }
@@ -172,7 +181,10 @@ io_mapping_unmap_atomic(void __iomem *va
 {
 	io_mapping_unmap(vaddr);
 	pagefault_enable();
-	preempt_enable();
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		preempt_enable();
+	else
+		migrate_enable();
 }
 
 static inline void __iomem *
_

Patches currently in -mm which might be from bigeasy@linutronix.de are
io-mapping-dont-disable-preempt-on-rt-in-io_mapping_map_atomic_wc.patch