From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mga04.intel.com (mga04.intel.com [192.55.52.120])
	(using TLSv1 with cipher CAMELLIA256-SHA (256/256 bits))
	(No client certificate requested)
	by ml01.01.org (Postfix) with ESMTPS id 061DB1A1F3F
	for ; Wed, 7 Sep 2016 15:29:17 -0700 (PDT)
Subject: [PATCH v2 2/2] mm: fix cache mode tracking in vm_insert_mixed()
From: Dan Williams
Date: Wed, 07 Sep 2016 15:26:19 -0700
Message-ID: <147328717909.35069.14256589123570653697.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <147328716869.35069.16311932814998156819.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <147328716869.35069.16311932814998156819.stgit@dwillia2-desk3.amr.corp.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Errors-To: linux-nvdimm-bounces@lists.01.org
Sender: "Linux-nvdimm"
To: linux-nvdimm@lists.01.org
Cc: Matthew Wilcox, David Airlie, linux-kernel@vger.kernel.org,
	dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
	akpm@linux-foundation.org

vm_insert_mixed(), unlike vm_insert_pfn_prot() and vmf_insert_pfn_pmd(),
fails to check the pgprot_t it uses for the mapping against the one
recorded in the memtype tracking tree.  Add the missing call to
track_pfn_insert() to preclude cases where incompatible aliased mappings
are established for a given physical address range.
Cc: David Airlie
Cc: dri-devel@lists.freedesktop.org
Cc: Matthew Wilcox
Cc: Andrew Morton
Cc: Ross Zwisler
Signed-off-by: Dan Williams
---
 mm/memory.c |    8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 83be99d9d8a1..8841fed328f9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1649,10 +1649,14 @@ EXPORT_SYMBOL(vm_insert_pfn_prot);
 int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
 			pfn_t pfn)
 {
+	pgprot_t pgprot = vma->vm_page_prot;
+
 	BUG_ON(!(vma->vm_flags & VM_MIXEDMAP));
 
 	if (addr < vma->vm_start || addr >= vma->vm_end)
 		return -EFAULT;
+	if (track_pfn_insert(vma, &pgprot, pfn))
+		return -EINVAL;
 
 	/*
 	 * If we don't have pte special, then we have to use the pfn_valid()
@@ -1670,9 +1674,9 @@ int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
 		 * result in pfn_t_has_page() == false.
 		 */
 		page = pfn_to_page(pfn_t_to_pfn(pfn));
-		return insert_page(vma, addr, page, vma->vm_page_prot);
+		return insert_page(vma, addr, page, pgprot);
 	}
-	return insert_pfn(vma, addr, pfn, vma->vm_page_prot);
+	return insert_pfn(vma, addr, pfn, pgprot);
 }
 EXPORT_SYMBOL(vm_insert_mixed);

_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm