References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-2-tientzu@chromium.org>
 <170a54f2-be20-ec29-1d7f-3388e5f928c6@gmail.com>
In-Reply-To: <170a54f2-be20-ec29-1d7f-3388e5f928c6@gmail.com>
From: Claire Chang
Date: Thu, 20 May 2021 14:40:23 +0800
Subject: Re: [PATCH v7 01/15] swiotlb: Refactor swiotlb init functions
To: Florian Fainelli
Cc: Rob Herring, mpe@ellerman.id.au, Joerg Roedel, Will Deacon,
 Frank Rowand, Konrad Rzeszutek Wilk, boris.ostrovsky@oracle.com,
 jgross@suse.com, Christoph Hellwig, Marek Szyprowski,
 benh@kernel.crashing.org, paulus@samba.org, "list@263.net:IOMMU DRIVERS",
 sstabellini@kernel.org, Robin Murphy, grant.likely@arm.com,
 xypron.glpk@gmx.de, Thierry Reding, mingo@kernel.org,
 bauerman@linux.ibm.com, peterz@infradead.org, Greg KH, Saravana Kannan,
 "Rafael J . Wysocki", heikki.krogerus@linux.intel.com, Andy Shevchenko,
 Randy Dunlap, Dan Williams, Bartosz Golaszewski, linux-devicetree, lkml,
 linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org,
 Nicolas Boichat, Jim Quinlan, Tomasz Figa, bskeggs@redhat.com,
 Bjorn Helgaas, chris@chris-wilson.co.uk, Daniel Vetter, airlied@linux.ie,
 dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
 jani.nikula@linux.intel.com, Jianxiong Gao,
 joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org,
 maarten.lankhorst@linux.intel.com, matthew.auld@intel.com,
 rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, May 20, 2021 at 2:50 AM Florian Fainelli wrote:
>
>
>
> On 5/17/2021 11:42 PM, Claire Chang wrote:
> > Add a new function, swiotlb_init_io_tlb_mem, for the io_tlb_mem struct
> > initialization to make the code reusable.
> >
> > Note that we now also call set_memory_decrypted in swiotlb_init_with_tbl.
> >
> > Signed-off-by: Claire Chang
> > ---
> >  kernel/dma/swiotlb.c | 51 ++++++++++++++++++++++----------------------
> >  1 file changed, 25 insertions(+), 26 deletions(-)
> >
> > diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> > index 8ca7d505d61c..d3232fc19385 100644
> > --- a/kernel/dma/swiotlb.c
> > +++ b/kernel/dma/swiotlb.c
> > @@ -168,9 +168,30 @@ void __init swiotlb_update_mem_attributes(void)
> >  	memset(vaddr, 0, bytes);
> >  }
> >
> > -int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
> > +static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
> > +				    unsigned long nslabs, bool late_alloc)
> >  {
> > +	void *vaddr = phys_to_virt(start);
> >  	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
> > +
> > +	mem->nslabs = nslabs;
> > +	mem->start = start;
> > +	mem->end = mem->start + bytes;
> > +	mem->index = 0;
> > +	mem->late_alloc = late_alloc;
> > +	spin_lock_init(&mem->lock);
> > +	for (i = 0; i < mem->nslabs; i++) {
> > +		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
> > +		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
> > +		mem->slots[i].alloc_size = 0;
> > +	}
> > +
> > +	set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
> > +	memset(vaddr, 0, bytes);
>
> You are doing an unconditional set_memory_decrypted() followed by a
> memset here, and then:
>
> > +}
> > +
> > +int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
> > +{
> >  	struct io_tlb_mem *mem;
> >  	size_t alloc_size;
> >
> > @@ -186,16 +207,8 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
> >  	if (!mem)
> >  		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
> >  		      __func__, alloc_size, PAGE_SIZE);
> > -	mem->nslabs = nslabs;
> > -	mem->start = __pa(tlb);
> > -	mem->end = mem->start + bytes;
> > -	mem->index = 0;
> > -	spin_lock_init(&mem->lock);
> > -	for (i = 0; i < mem->nslabs; i++) {
> > -		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
> > -		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
> > -		mem->slots[i].alloc_size = 0;
> > -	}
> > +
> > +	swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
>
> You convert this call site with swiotlb_init_io_tlb_mem() which did not
> do the set_memory_decrypted()+memset(). Is this okay or should
> swiotlb_init_io_tlb_mem() add an additional argument to do this
> conditionally?

I'm actually not sure if this is okay. If not, I will add an additional
argument for it.

> --
> Florian
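
(For discussion only; this is not part of the v7 patch.) If the extra
argument is the way to go, one possible shape is sketched below. The
init_buffer flag name and the call-site values are assumptions made for
illustration; everything else mirrors the helper quoted above:

static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
				    unsigned long nslabs, bool late_alloc,
				    bool init_buffer)
{
	void *vaddr = phys_to_virt(start);
	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;

	mem->nslabs = nslabs;
	mem->start = start;
	mem->end = mem->start + bytes;
	mem->index = 0;
	mem->late_alloc = late_alloc;
	spin_lock_init(&mem->lock);
	for (i = 0; i < mem->nslabs; i++) {
		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
		mem->slots[i].alloc_size = 0;
	}

	/*
	 * Hypothetical flag: only decrypt and zero the buffer when the
	 * caller asks for it, so a caller that defers this work (e.g. to
	 * swiotlb_update_mem_attributes()) can skip it here.
	 */
	if (!init_buffer)
		return;

	set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
	memset(vaddr, 0, bytes);
}

The early-init call site would then pass false, keeping the current
behaviour of leaving the decryption and zeroing to
swiotlb_update_mem_attributes(), while later callers could pass true.
Whether that split is actually needed is exactly the open question above.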