From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 5 Dec 2018 14:41:30 +0000
From: Will Deacon
To: Nicolas Boichat
Cc: Vlastimil Babka, Robin Murphy, Christoph Lameter, Michal Hocko,
 Matthias Brugger, hch@infradead.org, Matthew Wilcox, Joerg Roedel,
 Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Mel Gorman,
 Levin Alexander, Huaisheng Ye, Mike Rapoport, linux-arm Mailing List,
 iommu@lists.linux-foundation.org, lkml, linux-mm@kvack.org, Yong Wu,
 Tomasz Figa, yingjoe.chen@mediatek.com, Hsin-Yi Wang, Daniel Kurtz
Subject: Re: [PATCH v2 0/3] iommu/io-pgtable-arm-v7s: Use DMA32 zone for page tables
Message-ID: <20181205144130.GA16121@arm.com>
References: <20181111090341.120786-1-drinkcat@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:
User-Agent: Mutt/1.5.23 (2014-03-12)
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Dec 05, 2018 at 10:04:00AM +0800, Nicolas Boichat wrote:
> On Tue, Dec 4, 2018 at 10:35 PM Vlastimil Babka wrote:
> >
> > On 12/4/18 10:37 AM, Nicolas Boichat wrote:
> > > On Sun, Nov 11, 2018 at 5:04 PM Nicolas Boichat wrote:
> > >>
> > >> This is a follow-up to the discussion in [1], to make sure that the page
> > >> tables allocated by iommu/io-pgtable-arm-v7s are contained within 32-bit
> > >> physical address space.
> > >>
> > >> [1] https://lists.linuxfoundation.org/pipermail/iommu/2018-November/030876.html
> > >
> > > Hi everyone,
> > >
> > > Let's try to summarize here.
> > >
> > > First, we confirmed that this is a regression, and IOMMU errors happen
> > > on 4.19 and linux-next/master on MT8173 (elm, Acer Chromebook R13).
> > > The issue most likely starts from ad67f5a6545f ("arm64: replace
> > > ZONE_DMA with ZONE_DMA32"), i.e. 4.15, and presumably breaks a number
> > > of Mediatek platforms (and maybe others?).
> > >
> > > We have a few options here:
> > > 1. This series [2], that adds support for GFP_DMA32 slab caches,
> > > _without_ adding kmalloc caches (since there are no users of
> > > kmalloc(..., GFP_DMA32)). I think I've addressed all the comments on
> > > the 3 patches, and AFAICT this solution works fine.
> > > 2. genalloc. That works, but unless we preallocate 4MB for L2 tables
> > > (which is wasteful as we usually only need a handful of L2 tables),
> > > we'll need changes in the core (use GFP_ATOMIC) to allow allocating on
> > > demand, and as it stands we'd have no way to shrink the allocation.
> > > 3. page_frag [3]. That works fine, and the code is quite simple. One
> > > drawback is that fragments in partially freed pages cannot be reused
> > > (from limited experiments, I see that IOMMU L2 tables are rarely
> > > freed, so it's unlikely a whole page would get freed). But given the
> > > low number of L2 tables, maybe we can live with that.
> > >
> > > I think 2 is out. Any preference between 1 and 3? I think 1 makes
> > > better use of the memory, so that'd be my preference. But I'm probably
> > > missing something.
> >
> > I would prefer 1 as well. IIRC you already confirmed that alignment
> > requirements are not broken for custom kmem caches even in presence of
> > SLUB debug options (and I would say it's a bug to be fixed if they
> > weren't). I just asked (and didn't get a reply I think) about your
> > ability to handle the GFP_ATOMIC allocation failures. They should be
> > rare when only single page allocations are needed for the kmem cache.
> > But in case they are not an option, then preallocating would be needed,
> > thus probably option 2.
>
> Oh, sorry, I missed your question.
>
> I don't have a full answer, but:
> - The allocations themselves are rare (I count a few 10s of L2 tables
> at most on my system, I assume we rarely have >100), and yes, we only
> need a single page, so the failures should be exceptional.
> - My change is probably not making anything worse: I assume that even
> with the current approach using GFP_DMA slab caches on older kernels,
> failures could potentially happen. I don't think we've seen those. If
> we are really concerned about this, maybe we'd need to modify
> mtk_iommu_map to not hold a spinlock (if that's possible), so we don't
> need to use GFP_ATOMIC. I suggest we just keep an eye on such issues,
> and address them if they show up (we can even revisit genalloc at that
> stage).

I think the spinlock is the least of our worries: the map/unmap routines
can be called in irq context and may need to allocate second-level tables.
Will
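
For concreteness, a minimal sketch of what option 1 boils down to for the
v7s level-2 tables, assuming the SLAB_CACHE_DMA32 flag proposed by the
series lands under that name; the helper and cache names below are
illustrative, not the actual patch:

#include <linux/slab.h>

/* ARMv7 short-descriptor level-2 table: 256 entries * 4 bytes. */
#define L2_TABLE_SIZE	1024

static struct kmem_cache *l2_tables;

static int l2_tables_init(void)
{
	/*
	 * SLAB_CACHE_DMA32 (added by the series under discussion) makes the
	 * cache back its slabs with GFP_DMA32 pages, so every object sits
	 * below 4GB; size == align preserves the 1KB alignment the table
	 * walker needs.
	 */
	l2_tables = kmem_cache_create("io-pgtable_armv7s_l2",
				      L2_TABLE_SIZE, L2_TABLE_SIZE,
				      SLAB_CACHE_DMA32, NULL);
	return l2_tables ? 0 : -ENOMEM;
}

static void *l2_table_alloc_atomic(void)
{
	/*
	 * Safe under a spinlock or in irq context; the zone restriction
	 * comes from the cache flag, so the gfp stays plain GFP_ATOMIC.
	 * The allocation can still fail, which is the rare case discussed
	 * in the thread above.
	 */
	return kmem_cache_zalloc(l2_tables, GFP_ATOMIC);
}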
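
Option 3 would look roughly like the sketch below; the static frag cache
and the explicit memset are assumptions for illustration. page_frag_alloc()
hands out fragments carved from a GFP_DMA32 page, and the backing page is
only returned once every fragment has been freed, which is the reuse
limitation mentioned above:

#include <linux/gfp.h>
#include <linux/mm_types.h>
#include <linux/string.h>

#define L2_TABLE_SIZE	1024	/* 256 entries * 4 bytes */

static struct page_frag_cache l2_frag_cache;	/* zero-initialised, refilled on demand */

static void *l2_table_alloc(gfp_t gfp)
{
	/* Carve a 1KB fragment out of a page allocated below 4GB. */
	void *table = page_frag_alloc(&l2_frag_cache, L2_TABLE_SIZE,
				      gfp | GFP_DMA32);

	if (table)
		memset(table, 0, L2_TABLE_SIZE);
	return table;
}

static void l2_table_free(void *table)
{
	/* The page itself is freed only once all of its fragments are. */
	page_frag_free(table);
}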