From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alexey Kardashevskiy <aik@ozlabs.ru>
To: linuxppc-dev@lists.ozlabs.org
Cc: Alexey Kardashevskiy, Benjamin Herrenschmidt, Paul Mackerras,
	Alex Williamson, Gavin Shan, Alexander Graf,
	linux-kernel@vger.kernel.org
Subject: [PATCH v4 03/28] vfio: powerpc/spapr: Check that TCE page size is equal to it_page_size
Date: Mon, 16 Feb 2015 21:05:55 +1100
Message-Id: <1424081180-4494-4-git-send-email-aik@ozlabs.ru>
X-Mailer: git-send-email 2.0.0
In-Reply-To: <1424081180-4494-1-git-send-email-aik@ozlabs.ru>
References: <1424081180-4494-1-git-send-email-aik@ozlabs.ru>
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

This checks that the TCE table page size is not bigger than the size of
the page we have just pinned and are about to put the physical address
of into the table. Otherwise the hardware would get unwanted access to
the physical memory between the end of the actual page and the end of
the aligned-up TCE page.

Since compound_order() and compound_head() work correctly on non-huge
pages, no additional check for whether the page is huge is needed.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v4:
* s/tce_check_page_size/tce_page_is_contained/
---
 drivers/vfio/vfio_iommu_spapr_tce.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c
index daf2e2c..ce896244 100644
--- a/drivers/vfio/vfio_iommu_spapr_tce.c
+++ b/drivers/vfio/vfio_iommu_spapr_tce.c
@@ -49,6 +49,22 @@ struct tce_container {
 	bool enabled;
 };
 
+static bool tce_page_is_contained(struct page *page, unsigned page_shift)
+{
+	unsigned shift;
+
+	/*
+	 * Check that the TCE table granularity is not bigger than the size of
+	 * a page we just found. Otherwise the hardware can get access to
+	 * a bigger memory chunk that it should.
+	 */
+	shift = PAGE_SHIFT + compound_order(compound_head(page));
+	if (shift >= page_shift)
+		return true;
+
+	return false;
+}
+
 static int tce_iommu_enable(struct tce_container *container)
 {
 	int ret = 0;
@@ -209,6 +225,12 @@ static long tce_iommu_build(struct tce_container *container,
 			ret = -EFAULT;
 			break;
 		}
+
+		if (!tce_page_is_contained(page, tbl->it_page_shift)) {
+			ret = -EPERM;
+			break;
+		}
+
 		hva = (unsigned long) page_address(page) +
 			(tce & IOMMU_PAGE_MASK(tbl) & ~PAGE_MASK);
-- 
2.0.0