From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 3 Feb 2020 09:41:34 -0800
From: Christoph Hellwig
To: Matthew Wilcox
Cc: Jann Horn, Kees Cook, Christian Borntraeger, Christoph Hellwig,
	Christopher Lameter, Jiri Slaby, Julian Wiedmann, Ursula Braun,
	Alexander Viro, kernel list, David Windsor, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Linux-MM,
	linux-xfs@vger.kernel.org, Linus Torvalds, Andy Lutomirski,
	"David S. Miller", Laura Abbott, Mark Rutland, "Martin K. Petersen",
	Paolo Bonzini, Dave Kleikamp, Jan Kara, Marc Zyngier, Matthew Garrett,
	linux-fsdevel, linux-arch, Network Development, Kernel Hardening,
	Vlastimil Babka, Michal Kubecek
Subject: Re: [kernel-hardening] [PATCH 09/38] usercopy: Mark kmalloc caches as usercopy caches
Message-ID: <20200203174134.GC30011@infradead.org>
References: <202001281457.FA11CC313A@keescook>
	<6844ea47-8e0e-4fb7-d86f-68046995a749@de.ibm.com>
	<20200129170939.GA4277@infradead.org>
	<771c5511-c5ab-3dd1-d938-5dbc40396daa@de.ibm.com>
	<202001300945.7D465B5F5@keescook>
	<202002010952.ACDA7A81@keescook>
	<20200203074644.GD8731@bombadil.infradead.org>
In-Reply-To: <20200203074644.GD8731@bombadil.infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
On Sun, Feb 02, 2020 at 11:46:44PM -0800, Matthew Wilcox wrote:
> > gives the hardware access to completely unrelated memory.) Instead,
> > they get pages from the page allocator, and these pages may e.g. be
> > allocated from the DMA, DMA32 or NORMAL zones depending on the
> > restrictions imposed by hardware. So I think the usercopy restriction
> > only affects a few oddball drivers (like this s390 stuff), which is
> > why you're not seeing more bug reports caused by this.
>
> Getting pages from the page allocator is true for dma_alloc_coherent()
> and friends.

dma_alloc_coherent() also has a few memory sources other than the page
allocator.

> But it's not true for streaming DMA mappings (dma_map_*)
> for which the memory usually comes from kmalloc(). If this is something
> we want to fix (and I have an awful feeling we're going to regret it
> if we say "no, we trust the hardware"), we're going to have to come up
> with a new memory allocation API for these cases. Or bounce bugger the
> memory for devices we don't trust.

There aren't too many places that use slab allocations for streaming
mappings and then do usercopies into them, but I vaguely remember some
USB code getting the annotations for that a while ago.

But if you don't trust your hardware, you will need to use IOMMUs and
page-aligned memory, or IOMMUs plus bounce buffering for the kmalloc
allocations (we've recently grown code to do that for the intel-iommu
driver; it needs to be lifted into the generic IOMMU code).
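For concreteness, the combination at issue (a streaming DMA mapping on
kmalloc()ed memory that is later copied to userspace) looks roughly like
the sketch below. This is only an illustration of the pattern, not code
from any driver mentioned in the thread; the function and variable names
are made up.

/*
 * Illustrative sketch only: a driver DMAs into slab memory with a
 * streaming mapping and then copies the result to userspace.
 */
#include <linux/slab.h>
#include <linux/dma-mapping.h>
#include <linux/uaccess.h>

static int demo_read(struct device *dev, void __user *ubuf, size_t len)
{
	dma_addr_t dma;
	void *buf;
	int ret = 0;

	buf = kmalloc(len, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* Streaming mapping: the device writes straight into kmalloc memory. */
	dma = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, dma)) {
		ret = -EIO;
		goto out_free;
	}

	/* ... start the transfer and wait for it to complete ... */

	dma_unmap_single(dev, dma, len, DMA_FROM_DEVICE);

	/*
	 * Usercopy out of a kmalloc cache: with CONFIG_HARDENED_USERCOPY
	 * this relies on the kmalloc caches being whitelisted.
	 */
	if (copy_to_user(ubuf, buf, len))
		ret = -EFAULT;

out_free:
	kfree(buf);
	return ret;
}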
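And for comparison, the interface for whitelisting just part of a
dedicated slab cache for usercopy is kmem_cache_create_usercopy(); the
hardening code then rejects copies that fall outside the whitelisted
region of objects from that cache. A minimal sketch, with made-up
struct, field, and cache names:

/*
 * Illustrative sketch only: whitelist a single field of an object for
 * copy_{to,from}_user() while keeping the rest of the object off limits.
 */
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/stddef.h>

struct demo_obj {
	spinlock_t lock;	/* never exposed to userspace */
	char payload[128];	/* the only usercopy-able region */
};

static struct kmem_cache *demo_cache;

static int demo_cache_init(void)
{
	demo_cache = kmem_cache_create_usercopy("demo_obj",
			sizeof(struct demo_obj), 0, SLAB_HWCACHE_ALIGN,
			offsetof(struct demo_obj, payload),
			sizeof_field(struct demo_obj, payload),
			NULL);
	return demo_cache ? 0 : -ENOMEM;
}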