From: Doug Anderson
Date: Fri, 18 Jun 2021 15:09:48 -0700
Subject: Re: [PATCHv2 2/3] iommu/io-pgtable: Optimize partial walk flush for large scatter-gather list
To: Sai Prakash Ranjan, Robin Murphy
Cc: Will Deacon, Joerg Roedel, iommu, Linux ARM, LKML, linux-arm-msm,
 Bjorn Andersson, Krishna Reddy, Thierry Reding, Tomasz Figa
In-Reply-To: <150fc7ab1c7f9b70a95dae1f4bc3b9018c0f9e04.1623981933.git.saiprakash.ranjan@codeaurora.org>

Hi,

On Thu, Jun 17, 2021 at 7:51 PM Sai Prakash Ranjan wrote:
>
> Currently for iommu_unmap() of a large scatter-gather list with
> page-size elements, the majority of time is spent flushing the
> partial walks in __arm_lpae_unmap(), which is a VA-based TLB
> invalidation that invalidates page-by-page on IOMMUs like arm-smmu-v2
> (TLBIVA) that do not support range-based invalidations the way
> arm-smmu-v3.2 does.
>
> For example: to unmap a 32MB scatter-gather list with page-size
> elements (8192 entries), there are 16 2MB buffer unmaps based on the
> pgsize (2MB for a 4K granule), and each 2MB unmap further results in
> 512 TLBIVAs (2MB/4K), for a total of 8192 TLBIVAs (512*16), causing a
> huge overhead.
>
> So instead use the tlb_flush_all() callback (TLBIALL/TLBIASID) to
> invalidate the entire context for the partial walk flush on the
> select few platforms where the cost of over-invalidation is less than
> the unmap latency,
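Just to double-check the arithmetic for myself, here's a throwaway
sketch with the constants taken straight from the example above (all
the names below are mine, nothing from the patch):

#include <stdio.h>

int main(void)
{
	const unsigned long sg_bytes = 32UL << 20;	/* 32MB SG list    */
	const unsigned long blk_size = 2UL << 20;	/* 2MB block unmap */
	const unsigned long granule  = 4UL << 10;	/* 4K page         */

	unsigned long unmaps  = sg_bytes / blk_size;		/* 16       */
	unsigned long tlbivas = unmaps * (blk_size / granule);	/* 16 * 512 */

	/* prints: 16 block unmaps -> 8192 TLBIVAs */
	printf("%lu block unmaps -> %lu TLBIVAs\n", unmaps, tlbivas);
	return 0;
}

So the 8192 number checks out.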
It would probably be worth punching this description up a little bit.
Elsewhere you said in more detail why this over-invalidation is less
of a big deal for the Qualcomm SMMU. It's probably worth saying
something like that here, too. Like this bit, paraphrased from your
other email:

  On the qcom implementation, we have several performance improvements
  for TLB cache invalidations in HW, like wait-for-safe (for realtime
  clients such as camera and display) and a few others that allow for
  cache lookups/updates while a TLBI is in progress for the same
  context bank.

> using the newly
> introduced quirk IO_PGTABLE_QUIRK_TLB_INV_ALL. We also do this for
> non-strict mode, given it's all about over-invalidation to save time
> on individual unmaps and is generally non-deterministic anyway.
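If I've understood the mechanics, the quirk just swaps which flush
callback the partial-walk path uses. Roughly like the sketch below --
this is my paraphrase of the description, not the actual diff; the
structs are trimmed-down stand-ins for the real io-pgtable ones and
the quirk's bit position is made up:

#include <stddef.h>

struct iommu_flush_ops {
	void (*tlb_flush_all)(void *cookie);
	void (*tlb_flush_walk)(unsigned long iova, size_t size,
			       size_t granule, void *cookie);
};

struct io_pgtable_cfg {
	unsigned long quirks;
	const struct iommu_flush_ops *tlb;
	void *cookie;
};

#define IO_PGTABLE_QUIRK_TLB_INV_ALL	(1UL << 8)	/* bit assumed */

/*
 * With the quirk set, one TLBIALL/TLBIASID for the whole context
 * replaces the per-page TLBIVA storm (8192 invalidations in the 32MB
 * example), at the cost of over-invalidating everything else in that
 * context.
 */
static void flush_partial_walk(struct io_pgtable_cfg *cfg,
			       unsigned long iova, size_t size,
			       size_t granule)
{
	if (cfg->quirks & IO_PGTABLE_QUIRK_TLB_INV_ALL)
		cfg->tlb->tlb_flush_all(cfg->cookie);
	else
		cfg->tlb->tlb_flush_walk(iova, size, granule, cfg->cookie);
}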
As per usual I'm mostly clueless, but I don't quite understand why you
want this new behavior for non-strict mode. To me it almost seems like
the opposite? Specifically, non-strict mode is already outside the
critical path today, so there's no need to optimize it. I'm probably
not explaining myself clearly, but I guess I'm thinking (see the
sketch below):

a) Today for strict, unmap is in the critical path and it's important
   to get it out of there. Getting it out of the critical path is so
   important that we're willing to over-invalidate to speed it up.

b) Today for non-strict, unmap is not in the critical path.

So I would almost expect your patch to _disable_ your new feature for
non-strict mappings, not auto-enable it.
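Or, putting (a) and (b) into code form -- totally schematic, none of
these helpers or types are the real iommu API; they're made up for the
argument:

#include <stddef.h>

struct domain;	/* stands in for the real IOMMU domain */

/* Hypothetical helpers, declared only to make the shape clear. */
void clear_ptes(struct domain *dom, unsigned long iova, size_t size);
void tlb_invalidate(struct domain *dom, unsigned long iova, size_t size);
void queue_deferred_invalidate(struct domain *dom, unsigned long iova,
			       size_t size);

/*
 * (a) strict: the TLBI sits directly on the unmap critical path, so
 * over-invalidating to turn 8192 TLBIVAs into one TLBIALL can win.
 */
void unmap_strict(struct domain *dom, unsigned long iova, size_t size)
{
	clear_ptes(dom, iova, size);
	tlb_invalidate(dom, iova, size);	/* caller waits for this */
}

/*
 * (b) non-strict: the invalidation is deferred to a flush queue, so
 * unmap latency never sees the TLBI cost in the first place -- which
 * is why I'd have expected the quirk to matter less here.
 */
void unmap_nonstrict(struct domain *dom, unsigned long iova, size_t size)
{
	clear_ptes(dom, iova, size);
	queue_deferred_invalidate(dom, iova, size);	/* batched later */
}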
If I'm babbling, feel free to ignore. ;-) Looking back, I guess Robin
was the one who suggested the behavior you're implementing, so it's
more likely that he's right than I am. ;-)

-Doug