From mboxrd@z Thu Jan  1 00:00:00 1970
From: Jacob Pan
To: iommu@lists.linux-foundation.org, LKML, Lu Baolu, Joerg Roedel,
	David Woodhouse
Cc: Yi Liu, "Tian, Kevin", Raj Ashok, Eric Auger, Jacob Pan
Subject: [PATCH v5 4/7] iommu/vt-d: Handle non-page aligned address
Date: Wed, 22 Jul 2020 12:26:24 -0700
Message-Id: <1595445987-40095-5-git-send-email-jacob.jun.pan@linux.intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1595445987-40095-1-git-send-email-jacob.jun.pan@linux.intel.com>
References: <1595445987-40095-1-git-send-email-jacob.jun.pan@linux.intel.com>

From: Liu Yi L

Address information for device TLB invalidation comes from userspace
when a device is directly assigned to a guest with vIOMMU support.
VT-d requires the address to be page aligned. This patch checks for
and enforces page alignment; otherwise reserved bits could be set in
the invalidation descriptor, and an unrecoverable fault would be
reported due to the non-zero value in the reserved bits.

Fixes: 61a06a16e36d8 ("iommu/vt-d: Support flushing more translation cache types")
Acked-by: Lu Baolu
Signed-off-by: Liu Yi L
Signed-off-by: Jacob Pan
Reviewed-by: Eric Auger
---
 drivers/iommu/intel/dmar.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
index 9be08b9400ee..6038d02c5066 100644
--- a/drivers/iommu/intel/dmar.c
+++ b/drivers/iommu/intel/dmar.c
@@ -1456,9 +1456,25 @@ void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
 	 * Max Invs Pending (MIP) is set to 0 for now until we have DIT in
 	 * ECAP.
 	 */
-	desc.qw1 |= addr & ~mask;
-	if (size_order)
+	if (addr & GENMASK_ULL(size_order + VTD_PAGE_SHIFT, 0))
+		pr_warn_ratelimited("Invalidate non-aligned address %llx, order %d\n", addr, size_order);
+
+	/* Take page address */
+	desc.qw1 = QI_DEV_EIOTLB_ADDR(addr);
+
+	if (size_order) {
+		/*
+		 * Existing 0s in address below size_order may be the least
+		 * significant bit, we must set them to 1s to avoid having
+		 * smaller size than desired.
+		 */
+		desc.qw1 |= GENMASK_ULL(size_order + VTD_PAGE_SHIFT - 1,
+					VTD_PAGE_SHIFT);
+		/* Clear size_order bit to indicate size */
+		desc.qw1 &= ~mask;
+		/* Set the S bit to indicate flushing more than 1 page */
 		desc.qw1 |= QI_DEV_EIOTLB_SIZE;
+	}
 
 	qi_submit_sync(iommu, &desc, 1, 0);
 }
-- 
2.7.4
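
For readers who want to poke at the resulting bit layout, here is a
minimal standalone sketch of the encoding the hunk above produces.
It is an illustration, not part of the patch: the macros re-define the
kernel names (VTD_PAGE_SHIFT, GENMASK_ULL, QI_DEV_EIOTLB_ADDR,
QI_DEV_EIOTLB_SIZE) locally with their kernel values so it builds in
userspace, and encode_qw1() is a hypothetical helper that mirrors the
new code path, not a kernel function.

/*
 * Standalone illustration (not part of the patch) of the VT-d
 * device-IOTLB address/size encoding implemented by the hunk above.
 * The macros mirror the kernel definitions so this builds in
 * userspace; encode_qw1() is a hypothetical helper for this sketch.
 */
#include <stdio.h>
#include <stdint.h>

#define VTD_PAGE_SHIFT		12
#define GENMASK_ULL(h, l)	(((~0ULL) << (l)) & (~0ULL >> (63 - (h))))
#define QI_DEV_EIOTLB_ADDR(a)	((uint64_t)(a) & GENMASK_ULL(63, VTD_PAGE_SHIFT))
#define QI_DEV_EIOTLB_SIZE	(((uint64_t)1) << 11)	/* S bit */

static uint64_t encode_qw1(uint64_t addr, unsigned int size_order)
{
	/* Bit that marks a 2^size_order-page range (the "size_order bit") */
	uint64_t mask = 1ULL << (VTD_PAGE_SHIFT + size_order - 1);
	uint64_t qw1;

	/* Same check as the patch: low bits set means a misaligned address */
	if (addr & GENMASK_ULL(size_order + VTD_PAGE_SHIFT, 0))
		fprintf(stderr, "non-aligned address %#llx, order %u\n",
			(unsigned long long)addr, size_order);

	qw1 = QI_DEV_EIOTLB_ADDR(addr);		/* page address bits only */

	if (size_order) {
		/* Force address bits [size_order+11:12] to 1 so stray 0s
		 * cannot shrink the encoded range... */
		qw1 |= GENMASK_ULL(size_order + VTD_PAGE_SHIFT - 1,
				   VTD_PAGE_SHIFT);
		/* ...then clear the size_order bit: the least significant
		 * 0 bit encodes the range (VT-d spec 6.5.2.6). */
		qw1 &= ~mask;
		qw1 |= QI_DEV_EIOTLB_SIZE;	/* S=1: flush more than 1 page */
	}
	return qw1;
}

int main(void)
{
	/* 16KB (order-2) flush at a 16KB-aligned address */
	printf("qw1 = %#llx\n", (unsigned long long)encode_qw1(0x4000, 2));
	return 0;
}

Running it prints qw1 = 0x5800 for the order-2 flush at 0x4000:
bit 14 carries the page address, bit 12 is set and bit 13 is clear so
the least significant 0 bit encodes the 16KB range, and bit 11 is the
S bit indicating a multi-page flush.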