From: Lu Baolu
To: Daniel Drake
Cc: baolu.lu@linux.intel.com, Jon Derrick, Joerg Roedel, iommu@lists.linux-foundation.org, Bjorn Helgaas, Linux PCI, Linux Upstreaming Team
Subject: Re: [PATCH 1/1] iommu/vt-d: use DMA domain for real DMA devices and subdevices
Date: Mon, 13 Apr 2020 10:48:55 +0800
Message-ID: <32cc4809-7029-bc5e-5a74-abbe43596e8d@linux.intel.com>
References: <20200409191736.6233-1-jonathan.derrick@intel.com> <20200409191736.6233-2-jonathan.derrick@intel.com> <09c98569-ed22-8886-3372-f5752334f8af@linux.intel.com>
X-Mailing-List: linux-pci@vger.kernel.org

Hi Daniel,

On 2020/4/13 10:25, Daniel Drake wrote:
> On Fri, Apr 10, 2020 at 9:22 AM Lu Baolu wrote:
>> This is caused by the fragile private domain implementation. We are in
>> process of removing it by enhancing the iommu subsystem with per-group
>> default domain.
>>
>> https://www.spinics.net/lists/iommu/msg42976.html
>>
>> So ultimately VMD subdevices should have their own per-device iommu data
>> and support per-device dma ops.
>
> Interesting.
> There's also this patchset you posted:
> [PATCH 00/19] [PULL REQUEST] iommu/vt-d: patches for v5.7
> https://lists.linuxfoundation.org/pipermail/iommu/2020-April/042967.html
> (to be pushed out to 5.8)

Both are trying to solve the same problem. I have synced with Joerg. This
patch set will be replaced with Joerg's proposal due to a race concern
between domain switching and driver binding. I will rebase all vt-d
patches in this set on top of Joerg's change. (For reference, a rough
sketch of the before/after map path is appended below the quoted mail.)

Best regards,
baolu

>
> In there you have:
>> iommu/vt-d: Don't force 32bit devices to uses DMA domain
> which seems to clash with the approach being explored in this thread.
>
> And:
>> iommu/vt-d: Apply per-device dma_ops
> This effectively solves the trip point that caused me to open these
> discussions, where intel_map_page() -> iommu_need_mapping() would
> incorrectly determine that an intel-iommu DMA mapping was needed for a
> PCI subdevice running in identity mode. After this patch, a PCI
> subdevice in identity mode uses the default system dma_ops and
> completely avoids intel-iommu.
>
> So that solves the issues I was looking at. Jon, you might want to
> check if the problems you see are likewise solved for you by these
> patches.
>
> I didn't try Joerg's iommu group rework yet as it conflicts with those
> patches above.
>
> Daniel
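
To make the per-device dma ops point a bit more concrete, here is a rough
sketch, simplified from my reading of drivers/iommu/intel-iommu.c around
v5.6. It is not the actual patch, and device_needs_dma_domain() below is
only a placeholder for this sketch, not a real helper:

#include <linux/dma-mapping.h>
#include <linux/dma-direct.h>

/*
 * Today: intel_dma_ops is installed globally, so every map operation
 * re-checks whether the device is identity mapped. That check is where
 * a VMD subdevice can take the wrong branch.
 */
static dma_addr_t intel_map_page(struct device *dev, struct page *page,
				 unsigned long offset, size_t size,
				 enum dma_data_direction dir,
				 unsigned long attrs)
{
	if (iommu_need_mapping(dev))	/* can mis-fire for identity-mode subdevices */
		return __intel_map_single(dev, page_to_phys(page) + offset,
					  size, dir, *dev->dma_mask);
	return dma_direct_map_page(dev, page, offset, size, dir, attrs);
}

/*
 * With per-device dma_ops the decision moves to probe time: only
 * devices that really sit in a DMA domain get intel_dma_ops; identity
 * mapped devices (VMD subdevices included) stay on dma-direct and
 * never enter intel-iommu on the map path.
 */
static void intel_setup_dev_dma_ops(struct device *dev)
{
	if (device_needs_dma_domain(dev))	/* placeholder for the sketch */
		set_dma_ops(dev, &intel_dma_ops);
	else
		set_dma_ops(dev, NULL);		/* fall back to dma-direct */
}

The point is that the identity-vs-DMA decision is made once per device
instead of on every map call, which matches what Daniel describes above
for "iommu/vt-d: Apply per-device dma_ops".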