Subject: Re: [PATCH v8 3/3] iommu/arm-smmu: Add global/context fault implementation hooks
From: Robin Murphy
To: Krishna Reddy, Jonathan Hunter
Date: Wed, 1 Jul 2020 20:14:10 +0100
Message-ID: <446ffe79-3a44-5d41-459f-b698a1cc361b@arm.com>
References: <20200630001051.12350-1-vdumpa@nvidia.com>
 <20200630001051.12350-4-vdumpa@nvidia.com>
 <4b4b20af-7baa-0987-e40d-af74235153f6@nvidia.com>
 <6c2ce909-c71b-351f-79f5-b1a4b4c0e4ac@arm.com>
Cc: Timo Alho, Thierry Reding, Bryan Huntsman, linux-kernel@vger.kernel.org,
 iommu@lists.linux-foundation.org, Mikko Perttunen, nicoleotsuka@gmail.com,
 Sachin Nikam, Nicolin Chen, linux-tegra@vger.kernel.org, Yu-Huan Hsu,
 Pritesh Raithatha, will@kernel.org, linux-arm-kernel@lists.infradead.org,
 Bitan Biswas

On 2020-07-01 19:48, Krishna Reddy wrote:
>>>> +	for (inst = 0; inst < nvidia_smmu->num_inst; inst++) {
>>>> +		irq_ret = nvidia_smmu_global_fault_inst(irq, smmu, inst);
>>>> +		if (irq_ret == IRQ_HANDLED)
>>>> +			return irq_ret;
>>>
>>> Any chance there could be more than one SMMU faulting by the time we
>>> service the interrupt?
>
>> It certainly seems plausible if the interconnect is automatically
>> load-balancing requests across the SMMU instances - say a driver bug
>> caused a buffer to be unmapped too early, there could be many in-flight
>> accesses to parts of that buffer that aren't all taking the same path
>> and thus could now fault in parallel.
>
>> [ And for anyone inclined to nitpick global vs. context faults,
>> s/unmap a buffer/tear down a domain/ ;) ]
>
>> Either way I think it would be easier to reason about if we just
>> handled these like a typical shared interrupt and always checked all
>> the instances.
>
> It would be optimal to check at the same time across all instances.
>
>>>> +		for (idx = 0; idx < smmu->num_context_banks; idx++) {
>>>> +			irq_ret = nvidia_smmu_context_fault_bank(irq, smmu,
>>>> +								 idx,
>>>> +								 inst);
>>>> +
>>>> +			if (irq_ret == IRQ_HANDLED)
>>>> +				return irq_ret;
>>>
>>> Any reason why we don't check all banks?
>
>> As above, we certainly shouldn't bail out without checking the bank for
>> the offending domain across all of its instances, and I guess the way
>> this works means that we would have to iterate all the banks to achieve
>> that.
>
> With a shared IRQ line, context fault identification is already not
> optimal. Reading all the context banks every time adds extra MMIO read
> overhead, but it may not hurt real use cases, since these faults only
> happen when there are bugs.

Right, I did ponder the idea of a whole programmatic "request_context_irq"
hook that would allow registering the handler for both interrupts with the
appropriate context bank and instance data, but since all interrupts are
currently unexpected it seems somewhat hard to justify the extra complexity.
Obviously we can revisit this in future if you want to start actually doing
something with faults like the qcom GPU folks do.

Robin.
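
For context, below is a rough, illustrative sketch of the "treat it like a
typical shared interrupt" approach discussed above: poll every instance
(and, for context faults, every bank on every instance) and accumulate the
result instead of returning on the first IRQ_HANDLED. The struct layout and
the stubbed-out per-instance/per-bank helpers are simplified assumptions
made so the example stands alone; this is not the code from the patch under
review.

/*
 * Illustrative sketch only - not the patch itself.  It shows the
 * shared-interrupt style handling suggested above: every instance (and
 * every context bank) is checked, and the handler reports IRQ_NONE only
 * if none of them had a fault pending.
 */
#include <linux/interrupt.h>

struct nvidia_smmu_sketch {
	unsigned int num_inst;		/* mirrored SMMU instances */
	unsigned int num_context_banks;	/* context banks per instance */
};

/*
 * Simplified stand-ins for the per-instance / per-bank routines quoted
 * above; the versions in the patch read the fault status registers of the
 * given instance or bank and return IRQ_HANDLED if a fault was recorded.
 */
static irqreturn_t nvidia_smmu_global_fault_inst(int irq,
						 struct nvidia_smmu_sketch *smmu,
						 unsigned int inst)
{
	return IRQ_NONE;	/* stub for the sketch */
}

static irqreturn_t nvidia_smmu_context_fault_bank(int irq,
						  struct nvidia_smmu_sketch *smmu,
						  unsigned int idx,
						  unsigned int inst)
{
	return IRQ_NONE;	/* stub for the sketch */
}

static irqreturn_t nvidia_smmu_global_fault_sketch(int irq, void *dev)
{
	struct nvidia_smmu_sketch *smmu = dev;
	irqreturn_t ret = IRQ_NONE;
	unsigned int inst;

	/* Check every instance: several may have faulted in parallel. */
	for (inst = 0; inst < smmu->num_inst; inst++)
		if (nvidia_smmu_global_fault_inst(irq, smmu, inst) == IRQ_HANDLED)
			ret = IRQ_HANDLED;	/* note: no early return */

	return ret;
}

static irqreturn_t nvidia_smmu_context_fault_sketch(int irq, void *dev)
{
	struct nvidia_smmu_sketch *smmu = dev;
	irqreturn_t ret = IRQ_NONE;
	unsigned int inst, idx;

	/*
	 * A shared IRQ line cannot tell us which bank or instance raised
	 * the fault, so walk all of them and accumulate the result.
	 */
	for (inst = 0; inst < smmu->num_inst; inst++)
		for (idx = 0; idx < smmu->num_context_banks; idx++)
			if (nvidia_smmu_context_fault_bank(irq, smmu, idx,
							   inst) == IRQ_HANDLED)
				ret = IRQ_HANDLED;

	return ret;
}

The cost Krishna mentions is visible in the nested loop: every context
fault now costs one status-register read per bank per instance, which only
matters if faults ever become a hot path rather than a sign of a bug.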