From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 14 Apr 2021 17:47:29 +0200
From: Christoph Hellwig
To: Tianyu Lan
Cc: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
	wei.liu@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	x86@kernel.org, hpa@zytor.com, arnd@arndb.de, akpm@linux-foundation.org,
	gregkh@linuxfoundation.org, konrad.wilk@oracle.com, hch@lst.de,
	m.szyprowski@samsung.com, robin.murphy@arm.com, joro@8bytes.org,
	will@kernel.org, davem@davemloft.net, kuba@kernel.org, jejb@linux.ibm.com,
	martin.petersen@oracle.com, iommu@lists.linux-foundation.org,
	linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-scsi@vger.kernel.org, netdev@vger.kernel.org,
	vkuznets@redhat.com, thomas.lendacky@amd.com, brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: Re: [Resend RFC PATCH V2 10/12] HV/IOMMU: Add Hyper-V dma ops support
Message-ID: <20210414154729.GD32045@lst.de>
References: <20210414144945.3460554-1-ltykernel@gmail.com>
	<20210414144945.3460554-11-ltykernel@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210414144945.3460554-11-ltykernel@gmail.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

> +static dma_addr_t hyperv_map_page(struct device *dev, struct page *page,
> +				   unsigned long offset, size_t size,
> +				   enum dma_data_direction dir,
> +				   unsigned long attrs)
> +{
> +	phys_addr_t map, phys = (page_to_pfn(page) << PAGE_SHIFT) + offset;
> +
> +	if (!hv_is_isolation_supported())
> +		return phys;
> +
> +	map = swiotlb_tbl_map_single(dev, phys, size, HV_HYP_PAGE_SIZE, dir,
> +				     attrs);
> +	if (map == (phys_addr_t)DMA_MAPPING_ERROR)
> +		return DMA_MAPPING_ERROR;
> +
> +	return map;
> +}

This largely duplicates what dma-direct + swiotlb does.  Please use
force_dma_unencrypted to force bounce buffering and just use the generic
code.

> +	if (hv_isolation_type_snp()) {
> +		ret = hv_set_mem_host_visibility(
> +				phys_to_virt(hyperv_io_tlb_start),
> +				hyperv_io_tlb_size,
> +				VMBUS_PAGE_VISIBLE_READ_WRITE);
> +		if (ret)
> +			panic("%s: Fail to mark Hyper-v swiotlb buffer visible to host. err=%d\n",
> +			      __func__, ret);
> +
> +		hyperv_io_tlb_remap = ioremap_cache(hyperv_io_tlb_start
> +				+ ms_hyperv.shared_gpa_boundary,
> +				hyperv_io_tlb_size);
> +		if (!hyperv_io_tlb_remap)
> +			panic("%s: Fail to remap io tlb.\n", __func__);
> +
> +		memset(hyperv_io_tlb_remap, 0x00, hyperv_io_tlb_size);
> +		swiotlb_set_bounce_remap(hyperv_io_tlb_remap);

And this really needs to go into a common hook where we currently just
call set_memory_decrypted so that all the different schemes for these
trusted VMs (we have about half a dozen now) can share code rather than
reinventing it.
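
To illustrate the first point: x86 already implements force_dma_unencrypted()
for SEV/SME in arch/x86/mm/mem_encrypt.c, and dma-direct consults it instead
of requiring per-platform dma_map_ops.  Something along these lines (sketch
only, untested; hv_is_isolation_supported() is the helper from this series)
would let the generic code do the bounce buffering:

	/*
	 * Sketch: treat Hyper-V isolation VMs like other memory-encrypted
	 * guests.  With this returning true, dma-direct handles the swiotlb
	 * bouncing itself and hyperv_map_page() above becomes unnecessary.
	 */
	bool force_dma_unencrypted(struct device *dev)
	{
		return sev_active() || sme_active() ||
		       hv_is_isolation_supported();
	}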