Subject: Re: [PATCH RFC 1/3] mm: When CONFIG_ZONE_DMA32 is set, use DMA32 for SLAB_CACHE_DMA
From: Vlastimil Babka <vbabka@suse.cz>
To: Nicolas Boichat
Cc: robin.murphy@arm.com, will.deacon@arm.com, joro@8bytes.org, cl@linux.com,
 penberg@kernel.org, rientjes@google.com, Joonsoo Kim, Andrew Morton,
 mhocko@suse.com, mgorman@techsingularity.net, yehs1@lenovo.com,
 rppt@linux.vnet.ibm.com, linux-arm Mailing List,
 iommu@lists.linux-foundation.org, lkml, linux-mm@kvack.org,
 yong.wu@mediatek.com, Matthias Brugger, tfiga@google.com,
 yingjoe.chen@mediatek.com, Alexander.Levin@microsoft.com
References: <20181109082448.150302-1-drinkcat@chromium.org>
 <20181109082448.150302-2-drinkcat@chromium.org>
 <00afe803-22dd-5a75-70aa-dda0c7752470@suse.cz>
Message-ID: <8f58b778-f8ef-d32e-8803-2a20f171a0b1@suse.cz>
Date: Fri, 9 Nov 2018 13:14:41 +0100

On 11/9/18 12:57 PM, Nicolas Boichat wrote:
> On Fri, Nov 9, 2018 at 6:43 PM Vlastimil Babka wrote:
>> Also I'm probably missing the point of this all. In patch 3 you use
>> __get_dma32_pages(), thus __get_free_pages(__GFP_DMA32), which uses
>> alloc_pages, thus the page allocator directly, and there are no slab
>> caches involved.
>
> __get_dma32_pages fixes the level 1 page allocations in patch 3.
>
> This change fixes the level 2 page allocations
> (kmem_cache_zalloc(data->l2_tables, gfp | GFP_DMA)) by transparently
> remapping GFP_DMA to an underlying ZONE_DMA32.
>
> The alternative would be to create a new SLAB_CACHE_DMA32 when
> CONFIG_ZONE_DMA32 is defined, but then I'm concerned that the callers
> would need to choose between the two (GFP_DMA or GFP_DMA32...), and
> also need to use some ifdefs (but maybe that's not a valid concern?).
>
>> It makes little sense to involve slab for page table
>> allocations anyway, as those tend to be aligned to a page size (or a
>> high-order page size). So what am I missing?
>
> Level 2 tables are ARM_V7S_TABLE_SIZE(2) => 1KB, so we'd waste 3KB if
> we allocated a full page.

Oh, I see. Well, I think the most transparent approach would indeed be to
support SLAB_CACHE_DMA32. The callers of kmem_cache_zalloc() would then not
need to add anything special to gfp, as the restriction is stored internally
at kmem_cache_create() time. Of course SLAB_BUG_MASK would then no longer
have to treat __GFP_DMA32 as unexpected. It would still be unexpected when
passed to kmalloc(), which doesn't have special dma32 caches, but for a cache
explicitly created to allocate from ZONE_DMA32, I don't see why not.
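Roughly something like this on the io-pgtable side (just a sketch against no
particular tree; SLAB_CACHE_DMA32 is the hypothetical new flag, and the
ARM_V7S_TABLE_SIZE_L2 name and the two helpers are made up here purely to
illustrate the idea):

/*
 * Hypothetical sketch only: assumes a new SLAB_CACHE_DMA32 cache flag that
 * makes the cache allocate its backing slab pages from ZONE_DMA32.
 */
#include <linux/slab.h>

#define ARM_V7S_TABLE_SIZE_L2	1024	/* level-2 table is 1KB, not a full page */

static struct kmem_cache *l2_tables;

static int __init l2_tables_init(void)
{
	l2_tables = kmem_cache_create("io-pgtable_armv7s_l2",
				      ARM_V7S_TABLE_SIZE_L2,
				      ARM_V7S_TABLE_SIZE_L2,
				      SLAB_CACHE_DMA32, NULL);
	return l2_tables ? 0 : -ENOMEM;
}

static void *l2_table_alloc(gfp_t gfp)
{
	/*
	 * No __GFP_DMA/__GFP_DMA32 needed in gfp here: the zone restriction
	 * is a property of the cache, recorded when it was created.
	 */
	return kmem_cache_zalloc(l2_tables, gfp);
}

That would also keep the GFP_DMA vs. GFP_DMA32 choice and the ifdefs you
mention out of the callers entirely.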
I'm somewhat surprised that there wouldn't be a need for this earlier, so
maybe I'm still missing something.

> Thanks,
>