From: Linus Torvalds
Date: Sun, 20 Sep 2020 09:57:40 -0700
Subject: Re: [patch RFC 00/15] mm/highmem: Provide a preemptible variant of kmap_atomic & friends
To: Thomas Gleixner
Cc: Juri Lelli, Peter Zijlstra, Sebastian Andrzej Siewior, dri-devel,
 linux-mips@vger.kernel.org, Ben Segall, Max Filippov, Guo Ren,
 linux-sparc, Vincent Chen, Will Deacon, Ard Biesheuvel, linux-arch,
 Vincent Guittot, Herbert Xu, Michael Ellerman, the arch/x86 maintainers,
 Russell King, linux-csky@vger.kernel.org, David Airlie, Mel Gorman,
 "open list:SYNOPSYS ARC ARCHITECTURE", linux-xtensa@linux-xtensa.org,
 Paul McKenney, intel-gfx, linuxppc-dev, Steven Rostedt, Rodrigo Vivi,
 Dietmar Eggemann, Linux ARM, Chris Zankel, Michal Simek,
 Thomas Bogendoerfer, Nick Hu, Linux-MM, Vineet Gupta, LKML,
 Arnd Bergmann, Paul Mackerras, Andrew Morton,
 Daniel Bristot de Oliveira, "David S. Miller", Greentime Hu
In-Reply-To: <87k0wode9a.fsf@nanos.tec.linutronix.de>
References: <20200919091751.011116649@linutronix.de>
 <87mu1lc5mp.fsf@nanos.tec.linutronix.de>
 <87k0wode9a.fsf@nanos.tec.linutronix.de>
List-Id: Direct Rendering Infrastructure - Development <dri-devel@lists.freedesktop.org>

On Sun, Sep 20, 2020 at 1:49 AM Thomas Gleixner wrote:
>
> Actually most usage sites of kmap atomic do not need page faults to be
> disabled at all.

Right. I think the pagefault disabling has (almost) nothing at all to
do with the kmap() itself - it comes from the "atomic" part, not the
"kmap" part.

I say *almost*, because there is one issue that needs some thought:
the amount of kmap nesting.

The kmap_atomic() interface - and your local/temporary/whatever
versions of it - depends very much inherently on being strictly
nesting. In fact, it depends so much on it that maybe that should be
part of the new name?

It's very wrong to do

    addr1 = kmap_atomic();
    addr2 = kmap_atomic();
    ..do something with addr 1..
    kunmap_atomic(addr1);
    .. do something with addr 2..
    kunmap_atomic(addr2);

because the way we allocate the slots is by using a percpu-atomic
inc-return (and we deallocate using dec). So it's fundamentally a
stack.

And that's perfectly fine for page faults - if they do any kmaps,
those will obviously nest.

So the only issue with page faults might be that the stack grows
_larger_. And that might need some thought. We already make the kmap
stack bigger for CONFIG_DEBUG_HIGHMEM, and it's possible that if we
allow page faults we need to make the kmap stack bigger still.

Btw, looking at the stack code, I think your new implementation of it
is a bit scary:

   static inline int kmap_atomic_idx_push(void)
   {
  -       int idx = __this_cpu_inc_return(__kmap_atomic_idx) - 1;
  +       int idx = current->kmap_ctrl.idx++;

and now that 'current->kmap_ctrl.idx' is not atomic wrt

 (a) NMI's (this may be ok, maybe we never do kmaps in NMIs, and with
     nesting I think it's fine anyway - the NMI will undo whatever it did)

 (b) the prev/next switch

And that (b) part worries me.
You do the kmap_switch_temporary() to switch the entries, but you do
that *separately* from actually switching 'current' to the new value.

So kmap_switch_temporary() looks safe, but I don't think it actually
is. Because while it first unmaps the old entries and then remaps the
new ones, an interrupt can come in, and at that point it matters what
is *CURRENT*.

And regardless of whether 'current' is 'prev' or 'next', that
kmap_switch_temporary() loop may be doing the wrong thing, depending
on which one had the deeper stack.

The interrupt will be using whatever "current->kmap_ctrl.idx" is, but
that might overwrite entries that are in the process of being restored
(if current is still 'prev', but kmap_switch_temporary() is in the
"restore @next's kmaps" phase), or it might stomp on entries that have
been pte_clear()'ed by the 'prev' thing.

I dunno. The latter may be one of those "it works anyway, it
overwrites things we don't care about", but the former will most
definitely not work.

And it will be completely impossible to debug, because it will depend
on an interrupt that uses kmap_local/atomic/whatever() coming in
_just_ at the right point in the scheduler, and only when the
scheduler has been entered with the right number of kmap entries on
the prev/next stack. And no developer will ever see this with any
amount of debug code enabled, because it will only hit on legacy
platforms that do this kmap anyway.

So honestly, that code scares me. I think it's buggy. And even if it
"happens to work", it does so for all the wrong reasons, and is very
fragile.

So I would suggest:

 - continue to use an actual per-cpu kmap_atomic_idx

 - make the switching code save the old idx, then unmap the old
   entries one by one (while doing the proper "pop" action), and then
   map the new entries one by one (while doing the proper "push"
   action).
which would mean that the only index that is actually ever *USED* is
the percpu one, and it's always up-to-date and pushed/popped for
individual entries, rather than this - imho completely bogus -
optimization where you use "p->kmap_ctrl.idx" directly and very very
unsafely.

Alternatively, that process counter would need about a hundred lines
of commentary about exactly why it's safe. Because I don't think it
is.

               Linus