Subject: Re: [bug report] iommu_dma_unmap_sg() is very slow when running IO from remote numa node
From: Robin Murphy
To: Ming Lei, linux-nvme@lists.infradead.org, Will Deacon, linux-arm-kernel@lists.infradead.org, iommu@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org
Date: Fri, 9 Jul 2021 11:26:53 +0100
Message-ID: <23e7956b-f3b5-b585-3c18-724165994051@arm.com>

On 2021-07-09 09:38, Ming Lei wrote:
> Hello,
>
> I observed that NVMe performance is very bad when running fio on one
> CPU (aarch64) in a remote NUMA node, compared with running it on the
> NVMe device's own PCI NUMA node.
>
> Please see the test results[1]: 327K vs. 34.9K IOPS.
>
> A latency trace shows that one big difference is in iommu_dma_unmap_sg():
> 1111 nsecs vs. 25437 nsecs.

Are you able to dig down further into that? iommu_dma_unmap_sg() itself
doesn't do anything particularly special, so whatever makes a difference
is probably happening at a lower level, and I suspect an SMMU is involved.

If, for instance, it turns out to go all the way down to
__arm_smmu_cmdq_poll_until_consumed() because polling MMIO from the wrong
node is slow, there's unlikely to be much you can do about that other than
the global "go faster" knobs (iommu.strict and iommu.passthrough) with
their associated compromises.
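FWIW, if you want to see where the time actually goes below
iommu_dma_unmap_sg() without writing any instrumentation of your own, the
function_graph tracer can give per-call timings for everything underneath
it. A rough sketch only (exact function names depend on kernel version and
on what shows up in available_filter_functions):

  cd /sys/kernel/debug/tracing
  # trace iommu_dma_unmap_sg() and every child call, with durations
  echo function_graph > current_tracer
  echo iommu_dma_unmap_sg > set_graph_function
  echo 1 > tracing_on
  # ...run a short fio burst pinned to CPU 80, as in your test below...
  echo 0 > tracing_on
  less trace

Comparing that between a CPU 0 run and a CPU 80 run should show whether
the extra ~24us is spent in the SMMU command-queue code (the arm_smmu_*
calls) or somewhere else entirely.

For reference, the knobs I mean are kernel command-line parameters, each
trading some isolation or invalidation timeliness for speed, e.g.:

  iommu.strict=0       # lazy mode: batch and defer TLB invalidations
  iommu.passthrough=1  # bypass DMA translation altogether

or the corresponding IOMMU_DEFAULT_* Kconfig defaults, depending on
kernel version.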
Robin.

> [1] fio test & results
>
> 1) fio test result:
>
> - run fio on local CPU
> taskset -c 0 ~/git/tools/test/nvme/io_uring 10 1 /dev/nvme1n1 4k
> + fio --bs=4k --ioengine=io_uring --fixedbufs --registerfiles --hipri --iodepth=64 --iodepth_batch_submit=16 --iodepth_batch_complete_min=16 --filename=/dev/nvme1n1 --direct=1 --runtime=10 --numjobs=1 --rw=randread --name=test --group_reporting
>
> IOPS: 327K
> avg latency of iommu_dma_unmap_sg(): 1111 nsecs
>
> - run fio on remote CPU
> taskset -c 80 ~/git/tools/test/nvme/io_uring 10 1 /dev/nvme1n1 4k
> + fio --bs=4k --ioengine=io_uring --fixedbufs --registerfiles --hipri --iodepth=64 --iodepth_batch_submit=16 --iodepth_batch_complete_min=16 --filename=/dev/nvme1n1 --direct=1 --runtime=10 --numjobs=1 --rw=randread --name=test --group_reporting
>
> IOPS: 34.9K
> avg latency of iommu_dma_unmap_sg(): 25437 nsecs
>
> 2) system info
> [root@ampere-mtjade-04 ~]# lscpu | grep NUMA
> NUMA node(s):        2
> NUMA node0 CPU(s):   0-79
> NUMA node1 CPU(s):   80-159
>
> lspci | grep NVMe
> 0003:01:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983
>
> [root@ampere-mtjade-04 ~]# cat /sys/block/nvme1n1/device/device/numa_node
> 0
>
> Thanks,
> Ming
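One more data point that might be worth capturing alongside the system
info: which translation mode the NVMe's IOMMU group is actually using,
since that determines how much work the unmap path hands to the SMMU in
the first place. Something along these lines (sysfs layout from memory,
so the exact paths may differ on your kernel):

  cat /proc/cmdline | tr ' ' '\n' | grep iommu
  cat /sys/bus/pci/devices/0003:01:00.0/iommu_group/type

If that already reports an identity/passthrough domain, then the SMMU
command-queue theory above doesn't hold and the difference must be coming
from somewhere else.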