From mboxrd@z Thu Jan  1 00:00:00 1970
From: Christoph Hellwig
To: Dan Williams, Jérôme Glisse, Jason Gunthorpe, Ben Skeggs
Cc: Michal Hocko, linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org,
	linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
	linux-mm@kvack.org, nouveau@lists.freedesktop.org
Subject: [PATCH 04/25] mm: remove MEMORY_DEVICE_PUBLIC support
Date: Wed, 26 Jun 2019 14:27:03 +0200
Message-Id: <20190626122724.13313-5-hch@lst.de>
In-Reply-To: <20190626122724.13313-1-hch@lst.de>
References: <20190626122724.13313-1-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

The code hasn't been used since it was added to the tree, and doesn't
appear to actually be usable.

Signed-off-by: Christoph Hellwig
Reviewed-by: Jason Gunthorpe
Acked-by: Michal Hocko
---
 include/linux/hmm.h      |  4 ++--
 include/linux/ioport.h   |  1 -
 include/linux/memremap.h |  8 --------
 include/linux/mm.h       | 12 ------------
 mm/Kconfig               | 11 -----------
 mm/gup.c                 |  7 -------
 mm/hmm.c                 |  4 ++--
 mm/memcontrol.c          | 11 +++++------
 mm/memory-failure.c      |  6 +-----
 mm/memory.c              | 34 ----------------------------------
 mm/migrate.c             | 26 +++-----------------------
 mm/swap.c                | 11 -----------
 12 files changed, 13 insertions(+), 122 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 5c46b0f603fd..44a5ac738bb5 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -584,7 +584,7 @@ static inline void hmm_mm_destroy(struct mm_struct *mm) {}
 static inline void hmm_mm_init(struct mm_struct *mm) {}
 #endif /* IS_ENABLED(CONFIG_HMM_MIRROR) */
 
-#if IS_ENABLED(CONFIG_DEVICE_PRIVATE) || IS_ENABLED(CONFIG_DEVICE_PUBLIC)
+#if IS_ENABLED(CONFIG_DEVICE_PRIVATE)
 struct hmm_devmem;
 
 struct page *hmm_vma_alloc_locked_page(struct vm_area_struct *vma,
@@ -748,7 +748,7 @@ static inline unsigned long hmm_devmem_page_get_drvdata(const struct page *page)
 {
 	return page->hmm_data;
 }
-#endif /* CONFIG_DEVICE_PRIVATE || CONFIG_DEVICE_PUBLIC */
+#endif /* CONFIG_DEVICE_PRIVATE */
 #else /* IS_ENABLED(CONFIG_HMM) */
 static inline void hmm_mm_destroy(struct mm_struct *mm) {}
 static inline void hmm_mm_init(struct mm_struct *mm) {}
diff --git a/include/linux/ioport.h b/include/linux/ioport.h
index da0ebaec25f0..dd961882bc74 100644
--- a/include/linux/ioport.h
+++ b/include/linux/ioport.h
@@ -132,7 +132,6 @@ enum {
 	IORES_DESC_PERSISTENT_MEMORY		= 4,
 	IORES_DESC_PERSISTENT_MEMORY_LEGACY	= 5,
 	IORES_DESC_DEVICE_PRIVATE_MEMORY	= 6,
-	IORES_DESC_DEVICE_PUBLIC_MEMORY		= 7,
 };
 
 /* helpers to define resources */
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 1732dea030b2..995c62c5a48b 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -37,13 +37,6 @@ struct vmem_altmap {
  * A more complete discussion of unaddressable memory may be found in
  * include/linux/hmm.h and Documentation/vm/hmm.rst.
  *
- * MEMORY_DEVICE_PUBLIC:
- * Device memory that is cache coherent from device and CPU point of view. This
- * is use on platform that have an advance system bus (like CAPI or CCIX). A
- * driver can hotplug the device memory using ZONE_DEVICE and with that memory
- * type. Any page of a process can be migrated to such memory. However no one
- * should be allow to pin such memory so that it can always be evicted.
- *
  * MEMORY_DEVICE_FS_DAX:
  * Host memory that has similar access semantics as System RAM i.e. DMA
  * coherent and supports page pinning. In support of coordinating page
@@ -58,7 +51,6 @@ struct vmem_altmap {
  */
 enum memory_type {
 	MEMORY_DEVICE_PRIVATE = 1,
-	MEMORY_DEVICE_PUBLIC,
 	MEMORY_DEVICE_FS_DAX,
 	MEMORY_DEVICE_PCI_P2PDMA,
 };
diff --git a/include/linux/mm.h b/include/linux/mm.h
index dd0b5f4e1e45..6e4b9be08b13 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -944,7 +944,6 @@ static inline bool put_devmap_managed_page(struct page *page)
 		return false;
 	switch (page->pgmap->type) {
 	case MEMORY_DEVICE_PRIVATE:
-	case MEMORY_DEVICE_PUBLIC:
 	case MEMORY_DEVICE_FS_DAX:
 		__put_devmap_managed_page(page);
 		return true;
@@ -960,12 +959,6 @@ static inline bool is_device_private_page(const struct page *page)
 		page->pgmap->type == MEMORY_DEVICE_PRIVATE;
 }
 
-static inline bool is_device_public_page(const struct page *page)
-{
-	return is_zone_device_page(page) &&
-		page->pgmap->type == MEMORY_DEVICE_PUBLIC;
-}
-
 #ifdef CONFIG_PCI_P2PDMA
 static inline bool is_pci_p2pdma_page(const struct page *page)
 {
@@ -998,11 +991,6 @@ static inline bool is_device_private_page(const struct page *page)
 	return false;
 }
 
-static inline bool is_device_public_page(const struct page *page)
-{
-	return false;
-}
-
 static inline bool is_pci_p2pdma_page(const struct page *page)
 {
 	return false;
diff --git a/mm/Kconfig b/mm/Kconfig
index 0d2ba7e1f43e..6f35b85b3052 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -718,17 +718,6 @@ config DEVICE_PRIVATE
 	  memory; i.e., memory that is only accessible from the device (or
 	  group of devices). You likely also want to select HMM_MIRROR.
 
-config DEVICE_PUBLIC
-	bool "Addressable device memory (like GPU memory)"
-	depends on ARCH_HAS_HMM
-	select HMM
-	select DEV_PAGEMAP_OPS
-
-	help
-	  Allows creation of struct pages to represent addressable device
-	  memory; i.e., memory that is accessible from both the device and
-	  the CPU
-
 config FRAME_VECTOR
 	bool
diff --git a/mm/gup.c b/mm/gup.c
index ddde097cf9e4..fe131d879c70 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -605,13 +605,6 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address,
 		if ((gup_flags & FOLL_DUMP) || !is_zero_pfn(pte_pfn(*pte)))
 			goto unmap;
 		*page = pte_page(*pte);
-
-		/*
-		 * This should never happen (a device public page in the gate
-		 * area).
-		 */
-		if (is_device_public_page(*page))
-			goto unmap;
 	}
 	if (unlikely(!try_get_page(*page))) {
 		ret = -ENOMEM;
diff --git a/mm/hmm.c b/mm/hmm.c
index bd260a3b6b09..376159a769fb 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -1331,7 +1331,7 @@ EXPORT_SYMBOL(hmm_range_dma_unmap);
 #endif /* IS_ENABLED(CONFIG_HMM_MIRROR) */
 
-#if IS_ENABLED(CONFIG_DEVICE_PRIVATE) || IS_ENABLED(CONFIG_DEVICE_PUBLIC)
+#if IS_ENABLED(CONFIG_DEVICE_PRIVATE)
 struct page *hmm_vma_alloc_locked_page(struct vm_area_struct *vma,
 				       unsigned long addr)
 {
@@ -1478,4 +1478,4 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
 	return devmem;
 }
 EXPORT_SYMBOL_GPL(hmm_devmem_add);
-#endif /* CONFIG_DEVICE_PRIVATE || CONFIG_DEVICE_PUBLIC */
+#endif /* CONFIG_DEVICE_PRIVATE */
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index ba9138a4a1de..fa844ae85bce 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4994,8 +4994,8 @@ static int mem_cgroup_move_account(struct page *page,
  *   2(MC_TARGET_SWAP): if the swap entry corresponding to this pte is a
  *     target for charge migration. if @target is not NULL, the entry is stored
  *     in target->ent.
- *   3(MC_TARGET_DEVICE): like MC_TARGET_PAGE but page is MEMORY_DEVICE_PUBLIC
- *     or MEMORY_DEVICE_PRIVATE (so ZONE_DEVICE page and thus not on the lru).
+ *   3(MC_TARGET_DEVICE): like MC_TARGET_PAGE but page is MEMORY_DEVICE_PRIVATE
+ *     (so ZONE_DEVICE page and thus not on the lru).
  *     For now we such page is charge like a regular page would be as for all
  *     intent and purposes it is just special memory taking the place of a
  *     regular page.
@@ -5029,8 +5029,7 @@ static enum mc_target_type get_mctgt_type(struct vm_area_struct *vma,
 		 */
 		if (page->mem_cgroup == mc.from) {
 			ret = MC_TARGET_PAGE;
-			if (is_device_private_page(page) ||
-			    is_device_public_page(page))
+			if (is_device_private_page(page))
 				ret = MC_TARGET_DEVICE;
 			if (target)
 				target->page = page;
@@ -5101,8 +5100,8 @@ static int mem_cgroup_count_precharge_pte_range(pmd_t *pmd,
 	if (ptl) {
 		/*
 		 * Note their can not be MC_TARGET_DEVICE for now as we do not
-		 * support transparent huge page with MEMORY_DEVICE_PUBLIC or
-		 * MEMORY_DEVICE_PRIVATE but this might change.
+		 * support transparent huge page with MEMORY_DEVICE_PRIVATE but
+		 * this might change.
 		 */
 		if (get_mctgt_type_thp(vma, addr, *pmd, NULL) == MC_TARGET_PAGE)
 			mc.precharge += HPAGE_PMD_NR;
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 8da0334b9ca0..d9fc1a8bdf6a 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1177,16 +1177,12 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
 		goto unlock;
 	}
 
-	switch (pgmap->type) {
-	case MEMORY_DEVICE_PRIVATE:
-	case MEMORY_DEVICE_PUBLIC:
+	if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
 		/*
 		 * TODO: Handle HMM pages which may need coordination
 		 * with device-side memory.
 		 */
 		goto unlock;
-	default:
-		break;
 	}
 
 	/*
diff --git a/mm/memory.c b/mm/memory.c
index ddf20bd0c317..bd21e7063bf0 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -585,29 +585,6 @@ struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 			return NULL;
 		if (is_zero_pfn(pfn))
 			return NULL;
-
-		/*
-		 * Device public pages are special pages (they are ZONE_DEVICE
-		 * pages but different from persistent memory). They behave
-		 * allmost like normal pages. The difference is that they are
-		 * not on the lru and thus should never be involve with any-
-		 * thing that involve lru manipulation (mlock, numa balancing,
-		 * ...).
-		 *
-		 * This is why we still want to return NULL for such page from
-		 * vm_normal_page() so that we do not have to special case all
-		 * call site of vm_normal_page().
-		 */
-		if (likely(pfn <= highest_memmap_pfn)) {
-			struct page *page = pfn_to_page(pfn);
-
-			if (is_device_public_page(page)) {
-				if (with_public_device)
-					return page;
-				return NULL;
-			}
-		}
-
 		if (pte_devmap(pte))
 			return NULL;
@@ -797,17 +774,6 @@ copy_one_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 			rss[mm_counter(page)]++;
 	} else if (pte_devmap(pte)) {
 		page = pte_page(pte);
-
-		/*
-		 * Cache coherent device memory behave like regular page and
-		 * not like persistent memory page. For more informations see
-		 * MEMORY_DEVICE_CACHE_COHERENT in memory_hotplug.h
-		 */
-		if (is_device_public_page(page)) {
-			get_page(page);
-			page_dup_rmap(page, false);
-			rss[mm_counter(page)]++;
-		}
 	}
 
 out_set_pte:
diff --git a/mm/migrate.c b/mm/migrate.c
index f2ecc2855a12..149c692d5f9b 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -246,8 +246,6 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 		if (is_device_private_page(new)) {
 			entry = make_device_private_entry(new, pte_write(pte));
 			pte = swp_entry_to_pte(entry);
-		} else if (is_device_public_page(new)) {
-			pte = pte_mkdevmap(pte);
 		}
 	}
 
@@ -381,7 +379,6 @@ static int expected_page_refs(struct address_space *mapping, struct page *page)
 	 * ZONE_DEVICE pages.
 	 */
 	expected_count += is_device_private_page(page);
-	expected_count += is_device_public_page(page);
 	if (mapping)
 		expected_count += hpage_nr_pages(page) + page_has_private(page);
 
@@ -994,10 +991,7 @@ static int move_to_new_page(struct page *newpage, struct page *page,
 		if (!PageMappingFlags(page))
 			page->mapping = NULL;
 
-		if (unlikely(is_zone_device_page(newpage))) {
-			if (is_device_public_page(newpage))
-				flush_dcache_page(newpage);
-		} else
+		if (likely(!is_zone_device_page(newpage)))
 			flush_dcache_page(newpage);
 	}
 
@@ -2406,16 +2400,7 @@ static bool migrate_vma_check_page(struct page *page)
 		 * FIXME proper solution is to rework migration_entry_wait() so
 		 * it does not need to take a reference on page.
 		 */
-		if (is_device_private_page(page))
-			return true;
-
-		/*
-		 * Only allow device public page to be migrated and account for
-		 * the extra reference count imply by ZONE_DEVICE pages.
-		 */
-		if (!is_device_public_page(page))
-			return false;
-		extra++;
+		return is_device_private_page(page);
 	}
 
 	/* For file back page */
@@ -2665,11 +2650,6 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
 			swp_entry = make_device_private_entry(page, vma->vm_flags & VM_WRITE);
 			entry = swp_entry_to_pte(swp_entry);
-		} else if (is_device_public_page(page)) {
-			entry = pte_mkold(mk_pte(page, READ_ONCE(vma->vm_page_prot)));
-			if (vma->vm_flags & VM_WRITE)
-				entry = pte_mkwrite(pte_mkdirty(entry));
-			entry = pte_mkdevmap(entry);
 		}
 	} else {
 		entry = mk_pte(page, vma->vm_page_prot);
@@ -2789,7 +2769,7 @@ static void migrate_vma_pages(struct migrate_vma *migrate)
 				migrate->src[i] &= ~MIGRATE_PFN_MIGRATE;
 				continue;
 			}
-		} else if (!is_device_public_page(newpage)) {
+		} else {
 			/*
 			 * Other types of ZONE_DEVICE page are not
 			 * supported.
diff --git a/mm/swap.c b/mm/swap.c
index 7ede3eddc12a..83107410d29f 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -740,17 +740,6 @@ void release_pages(struct page **pages, int nr)
 		if (is_huge_zero_page(page))
 			continue;
 
-		/* Device public page can not be huge page */
-		if (is_device_public_page(page)) {
-			if (locked_pgdat) {
-				spin_unlock_irqrestore(&locked_pgdat->lru_lock,
-						       flags);
-				locked_pgdat = NULL;
-			}
-			put_devmap_managed_page(page);
-			continue;
-		}
-
 		page = compound_head(page);
 		if (!put_page_testzero(page))
 			continue;
-- 
2.20.1
_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm
eSBhbHNvIHdhbnQgdG8gc2VsZWN0IEhNTV9NSVJST1IuCiAKLWNvbmZpZyBERVZJQ0VfUFVCTElD Ci0JYm9vbCAiQWRkcmVzc2FibGUgZGV2aWNlIG1lbW9yeSAobGlrZSBHUFUgbWVtb3J5KSIKLQlk ZXBlbmRzIG9uIEFSQ0hfSEFTX0hNTQotCXNlbGVjdCBITU0KLQlzZWxlY3QgREVWX1BBR0VNQVBf T1BTCi0KLQloZWxwCi0JICBBbGxvd3MgY3JlYXRpb24gb2Ygc3RydWN0IHBhZ2VzIHRvIHJlcHJl c2VudCBhZGRyZXNzYWJsZSBkZXZpY2UKLQkgIG1lbW9yeTsgaS5lLiwgbWVtb3J5IHRoYXQgaXMg YWNjZXNzaWJsZSBmcm9tIGJvdGggdGhlIGRldmljZSBhbmQKLQkgIHRoZSBDUFUKLQogY29uZmln IEZSQU1FX1ZFQ1RPUgogCWJvb2wKIApkaWZmIC0tZ2l0IGEvbW0vZ3VwLmMgYi9tbS9ndXAuYwpp bmRleCBkZGRlMDk3Y2Y5ZTQuLmZlMTMxZDg3OWM3MCAxMDA2NDQKLS0tIGEvbW0vZ3VwLmMKKysr IGIvbW0vZ3VwLmMKQEAgLTYwNSwxMyArNjA1LDYgQEAgc3RhdGljIGludCBnZXRfZ2F0ZV9wYWdl KHN0cnVjdCBtbV9zdHJ1Y3QgKm1tLCB1bnNpZ25lZCBsb25nIGFkZHJlc3MsCiAJCWlmICgoZ3Vw X2ZsYWdzICYgRk9MTF9EVU1QKSB8fCAhaXNfemVyb19wZm4ocHRlX3BmbigqcHRlKSkpCiAJCQln b3RvIHVubWFwOwogCQkqcGFnZSA9IHB0ZV9wYWdlKCpwdGUpOwotCi0JCS8qCi0JCSAqIFRoaXMg c2hvdWxkIG5ldmVyIGhhcHBlbiAoYSBkZXZpY2UgcHVibGljIHBhZ2UgaW4gdGhlIGdhdGUKLQkJ ICogYXJlYSkuCi0JCSAqLwotCQlpZiAoaXNfZGV2aWNlX3B1YmxpY19wYWdlKCpwYWdlKSkKLQkJ CWdvdG8gdW5tYXA7CiAJfQogCWlmICh1bmxpa2VseSghdHJ5X2dldF9wYWdlKCpwYWdlKSkpIHsK IAkJcmV0ID0gLUVOT01FTTsKZGlmZiAtLWdpdCBhL21tL2htbS5jIGIvbW0vaG1tLmMKaW5kZXgg YmQyNjBhM2I2YjA5Li4zNzYxNTlhNzY5ZmIgMTAwNjQ0Ci0tLSBhL21tL2htbS5jCisrKyBiL21t L2htbS5jCkBAIC0xMzMxLDcgKzEzMzEsNyBAQCBFWFBPUlRfU1lNQk9MKGhtbV9yYW5nZV9kbWFf dW5tYXApOwogI2VuZGlmIC8qIElTX0VOQUJMRUQoQ09ORklHX0hNTV9NSVJST1IpICovCiAKIAot I2lmIElTX0VOQUJMRUQoQ09ORklHX0RFVklDRV9QUklWQVRFKSB8fCAgSVNfRU5BQkxFRChDT05G SUdfREVWSUNFX1BVQkxJQykKKyNpZiBJU19FTkFCTEVEKENPTkZJR19ERVZJQ0VfUFJJVkFURSkK IHN0cnVjdCBwYWdlICpobW1fdm1hX2FsbG9jX2xvY2tlZF9wYWdlKHN0cnVjdCB2bV9hcmVhX3N0 cnVjdCAqdm1hLAogCQkJCSAgICAgICB1bnNpZ25lZCBsb25nIGFkZHIpCiB7CkBAIC0xNDc4LDQg KzE0NzgsNCBAQCBzdHJ1Y3QgaG1tX2Rldm1lbSAqaG1tX2Rldm1lbV9hZGQoY29uc3Qgc3RydWN0 IGhtbV9kZXZtZW1fb3BzICpvcHMsCiAJcmV0dXJuIGRldm1lbTsKIH0KIEVYUE9SVF9TWU1CT0xf 
R1BMKGhtbV9kZXZtZW1fYWRkKTsKLSNlbmRpZiAvKiBDT05GSUdfREVWSUNFX1BSSVZBVEUgfHwg Q09ORklHX0RFVklDRV9QVUJMSUMgKi8KKyNlbmRpZiAvKiBDT05GSUdfREVWSUNFX1BSSVZBVEUg ICovCmRpZmYgLS1naXQgYS9tbS9tZW1jb250cm9sLmMgYi9tbS9tZW1jb250cm9sLmMKaW5kZXgg YmE5MTM4YTRhMWRlLi5mYTg0NGFlODViY2UgMTAwNjQ0Ci0tLSBhL21tL21lbWNvbnRyb2wuYwor KysgYi9tbS9tZW1jb250cm9sLmMKQEAgLTQ5OTQsOCArNDk5NCw4IEBAIHN0YXRpYyBpbnQgbWVt X2Nncm91cF9tb3ZlX2FjY291bnQoc3RydWN0IHBhZ2UgKnBhZ2UsCiAgKiAgIDIoTUNfVEFSR0VU X1NXQVApOiBpZiB0aGUgc3dhcCBlbnRyeSBjb3JyZXNwb25kaW5nIHRvIHRoaXMgcHRlIGlzIGEK ICAqICAgICB0YXJnZXQgZm9yIGNoYXJnZSBtaWdyYXRpb24uIGlmIEB0YXJnZXQgaXMgbm90IE5V TEwsIHRoZSBlbnRyeSBpcyBzdG9yZWQKICAqICAgICBpbiB0YXJnZXQtPmVudC4KLSAqICAgMyhN Q19UQVJHRVRfREVWSUNFKTogbGlrZSBNQ19UQVJHRVRfUEFHRSAgYnV0IHBhZ2UgaXMgTUVNT1JZ X0RFVklDRV9QVUJMSUMKLSAqICAgICBvciBNRU1PUllfREVWSUNFX1BSSVZBVEUgKHNvIFpPTkVf REVWSUNFIHBhZ2UgYW5kIHRodXMgbm90IG9uIHRoZSBscnUpLgorICogICAzKE1DX1RBUkdFVF9E RVZJQ0UpOiBsaWtlIE1DX1RBUkdFVF9QQUdFICBidXQgcGFnZSBpcyBNRU1PUllfREVWSUNFX1BS SVZBVEUKKyAqICAgICAoc28gWk9ORV9ERVZJQ0UgcGFnZSBhbmQgdGh1cyBub3Qgb24gdGhlIGxy dSkuCiAgKiAgICAgRm9yIG5vdyB3ZSBzdWNoIHBhZ2UgaXMgY2hhcmdlIGxpa2UgYSByZWd1bGFy IHBhZ2Ugd291bGQgYmUgYXMgZm9yIGFsbAogICogICAgIGludGVudCBhbmQgcHVycG9zZXMgaXQg aXMganVzdCBzcGVjaWFsIG1lbW9yeSB0YWtpbmcgdGhlIHBsYWNlIG9mIGEKICAqICAgICByZWd1 bGFyIHBhZ2UuCkBAIC01MDI5LDggKzUwMjksNyBAQCBzdGF0aWMgZW51bSBtY190YXJnZXRfdHlw ZSBnZXRfbWN0Z3RfdHlwZShzdHJ1Y3Qgdm1fYXJlYV9zdHJ1Y3QgKnZtYSwKIAkJICovCiAJCWlm IChwYWdlLT5tZW1fY2dyb3VwID09IG1jLmZyb20pIHsKIAkJCXJldCA9IE1DX1RBUkdFVF9QQUdF OwotCQkJaWYgKGlzX2RldmljZV9wcml2YXRlX3BhZ2UocGFnZSkgfHwKLQkJCSAgICBpc19kZXZp Y2VfcHVibGljX3BhZ2UocGFnZSkpCisJCQlpZiAoaXNfZGV2aWNlX3ByaXZhdGVfcGFnZShwYWdl KSkKIAkJCQlyZXQgPSBNQ19UQVJHRVRfREVWSUNFOwogCQkJaWYgKHRhcmdldCkKIAkJCQl0YXJn ZXQtPnBhZ2UgPSBwYWdlOwpAQCAtNTEwMSw4ICs1MTAwLDggQEAgc3RhdGljIGludCBtZW1fY2dy b3VwX2NvdW50X3ByZWNoYXJnZV9wdGVfcmFuZ2UocG1kX3QgKnBtZCwKIAlpZiAocHRsKSB7CiAJ 
CS8qCiAJCSAqIE5vdGUgdGhlaXIgY2FuIG5vdCBiZSBNQ19UQVJHRVRfREVWSUNFIGZvciBub3cg YXMgd2UgZG8gbm90Ci0JCSAqIHN1cHBvcnQgdHJhbnNwYXJlbnQgaHVnZSBwYWdlIHdpdGggTUVN T1JZX0RFVklDRV9QVUJMSUMgb3IKLQkJICogTUVNT1JZX0RFVklDRV9QUklWQVRFIGJ1dCB0aGlz IG1pZ2h0IGNoYW5nZS4KKwkJICogc3VwcG9ydCB0cmFuc3BhcmVudCBodWdlIHBhZ2Ugd2l0aCBN RU1PUllfREVWSUNFX1BSSVZBVEUgYnV0CisJCSAqIHRoaXMgbWlnaHQgY2hhbmdlLgogCQkgKi8K IAkJaWYgKGdldF9tY3RndF90eXBlX3RocCh2bWEsIGFkZHIsICpwbWQsIE5VTEwpID09IE1DX1RB UkdFVF9QQUdFKQogCQkJbWMucHJlY2hhcmdlICs9IEhQQUdFX1BNRF9OUjsKZGlmZiAtLWdpdCBh L21tL21lbW9yeS1mYWlsdXJlLmMgYi9tbS9tZW1vcnktZmFpbHVyZS5jCmluZGV4IDhkYTAzMzRi OWNhMC4uZDlmYzFhOGJkZjZhIDEwMDY0NAotLS0gYS9tbS9tZW1vcnktZmFpbHVyZS5jCisrKyBi L21tL21lbW9yeS1mYWlsdXJlLmMKQEAgLTExNzcsMTYgKzExNzcsMTIgQEAgc3RhdGljIGludCBt ZW1vcnlfZmFpbHVyZV9kZXZfcGFnZW1hcCh1bnNpZ25lZCBsb25nIHBmbiwgaW50IGZsYWdzLAog CQlnb3RvIHVubG9jazsKIAl9CiAKLQlzd2l0Y2ggKHBnbWFwLT50eXBlKSB7Ci0JY2FzZSBNRU1P UllfREVWSUNFX1BSSVZBVEU6Ci0JY2FzZSBNRU1PUllfREVWSUNFX1BVQkxJQzoKKwlpZiAocGdt YXAtPnR5cGUgPT0gTUVNT1JZX0RFVklDRV9QUklWQVRFKSB7CiAJCS8qCiAJCSAqIFRPRE86IEhh bmRsZSBITU0gcGFnZXMgd2hpY2ggbWF5IG5lZWQgY29vcmRpbmF0aW9uCiAJCSAqIHdpdGggZGV2 aWNlLXNpZGUgbWVtb3J5LgogCQkgKi8KIAkJZ290byB1bmxvY2s7Ci0JZGVmYXVsdDoKLQkJYnJl YWs7CiAJfQogCiAJLyoKZGlmZiAtLWdpdCBhL21tL21lbW9yeS5jIGIvbW0vbWVtb3J5LmMKaW5k ZXggZGRmMjBiZDBjMzE3Li5iZDIxZTcwNjNiZjAgMTAwNjQ0Ci0tLSBhL21tL21lbW9yeS5jCisr KyBiL21tL21lbW9yeS5jCkBAIC01ODUsMjkgKzU4NSw2IEBAIHN0cnVjdCBwYWdlICpfdm1fbm9y bWFsX3BhZ2Uoc3RydWN0IHZtX2FyZWFfc3RydWN0ICp2bWEsIHVuc2lnbmVkIGxvbmcgYWRkciwK IAkJCXJldHVybiBOVUxMOwogCQlpZiAoaXNfemVyb19wZm4ocGZuKSkKIAkJCXJldHVybiBOVUxM OwotCi0JCS8qCi0JCSAqIERldmljZSBwdWJsaWMgcGFnZXMgYXJlIHNwZWNpYWwgcGFnZXMgKHRo ZXkgYXJlIFpPTkVfREVWSUNFCi0JCSAqIHBhZ2VzIGJ1dCBkaWZmZXJlbnQgZnJvbSBwZXJzaXN0 ZW50IG1lbW9yeSkuIFRoZXkgYmVoYXZlCi0JCSAqIGFsbG1vc3QgbGlrZSBub3JtYWwgcGFnZXMu IFRoZSBkaWZmZXJlbmNlIGlzIHRoYXQgdGhleSBhcmUKLQkJICogbm90IG9uIHRoZSBscnUgYW5k 
IHRodXMgc2hvdWxkIG5ldmVyIGJlIGludm9sdmUgd2l0aCBhbnktCi0JCSAqIHRoaW5nIHRoYXQg aW52b2x2ZSBscnUgbWFuaXB1bGF0aW9uIChtbG9jaywgbnVtYSBiYWxhbmNpbmcsCi0JCSAqIC4u LikuCi0JCSAqCi0JCSAqIFRoaXMgaXMgd2h5IHdlIHN0aWxsIHdhbnQgdG8gcmV0dXJuIE5VTEwg Zm9yIHN1Y2ggcGFnZSBmcm9tCi0JCSAqIHZtX25vcm1hbF9wYWdlKCkgc28gdGhhdCB3ZSBkbyBu b3QgaGF2ZSB0byBzcGVjaWFsIGNhc2UgYWxsCi0JCSAqIGNhbGwgc2l0ZSBvZiB2bV9ub3JtYWxf cGFnZSgpLgotCQkgKi8KLQkJaWYgKGxpa2VseShwZm4gPD0gaGlnaGVzdF9tZW1tYXBfcGZuKSkg ewotCQkJc3RydWN0IHBhZ2UgKnBhZ2UgPSBwZm5fdG9fcGFnZShwZm4pOwotCi0JCQlpZiAoaXNf ZGV2aWNlX3B1YmxpY19wYWdlKHBhZ2UpKSB7Ci0JCQkJaWYgKHdpdGhfcHVibGljX2RldmljZSkK LQkJCQkJcmV0dXJuIHBhZ2U7Ci0JCQkJcmV0dXJuIE5VTEw7Ci0JCQl9Ci0JCX0KLQogCQlpZiAo cHRlX2Rldm1hcChwdGUpKQogCQkJcmV0dXJuIE5VTEw7CiAKQEAgLTc5NywxNyArNzc0LDYgQEAg Y29weV9vbmVfcHRlKHN0cnVjdCBtbV9zdHJ1Y3QgKmRzdF9tbSwgc3RydWN0IG1tX3N0cnVjdCAq c3JjX21tLAogCQlyc3NbbW1fY291bnRlcihwYWdlKV0rKzsKIAl9IGVsc2UgaWYgKHB0ZV9kZXZt YXAocHRlKSkgewogCQlwYWdlID0gcHRlX3BhZ2UocHRlKTsKLQotCQkvKgotCQkgKiBDYWNoZSBj b2hlcmVudCBkZXZpY2UgbWVtb3J5IGJlaGF2ZSBsaWtlIHJlZ3VsYXIgcGFnZSBhbmQKLQkJICog bm90IGxpa2UgcGVyc2lzdGVudCBtZW1vcnkgcGFnZS4gRm9yIG1vcmUgaW5mb3JtYXRpb25zIHNl ZQotCQkgKiBNRU1PUllfREVWSUNFX0NBQ0hFX0NPSEVSRU5UIGluIG1lbW9yeV9ob3RwbHVnLmgK LQkJICovCi0JCWlmIChpc19kZXZpY2VfcHVibGljX3BhZ2UocGFnZSkpIHsKLQkJCWdldF9wYWdl KHBhZ2UpOwotCQkJcGFnZV9kdXBfcm1hcChwYWdlLCBmYWxzZSk7Ci0JCQlyc3NbbW1fY291bnRl cihwYWdlKV0rKzsKLQkJfQogCX0KIAogb3V0X3NldF9wdGU6CmRpZmYgLS1naXQgYS9tbS9taWdy YXRlLmMgYi9tbS9taWdyYXRlLmMKaW5kZXggZjJlY2MyODU1YTEyLi4xNDljNjkyZDVmOWIgMTAw NjQ0Ci0tLSBhL21tL21pZ3JhdGUuYworKysgYi9tbS9taWdyYXRlLmMKQEAgLTI0Niw4ICsyNDYs NiBAQCBzdGF0aWMgYm9vbCByZW1vdmVfbWlncmF0aW9uX3B0ZShzdHJ1Y3QgcGFnZSAqcGFnZSwg c3RydWN0IHZtX2FyZWFfc3RydWN0ICp2bWEsCiAJCQlpZiAoaXNfZGV2aWNlX3ByaXZhdGVfcGFn ZShuZXcpKSB7CiAJCQkJZW50cnkgPSBtYWtlX2RldmljZV9wcml2YXRlX2VudHJ5KG5ldywgcHRl X3dyaXRlKHB0ZSkpOwogCQkJCXB0ZSA9IHN3cF9lbnRyeV90b19wdGUoZW50cnkpOwotCQkJfSBl 
bHNlIGlmIChpc19kZXZpY2VfcHVibGljX3BhZ2UobmV3KSkgewotCQkJCXB0ZSA9IHB0ZV9ta2Rl dm1hcChwdGUpOwogCQkJfQogCQl9CiAKQEAgLTM4MSw3ICszNzksNiBAQCBzdGF0aWMgaW50IGV4 cGVjdGVkX3BhZ2VfcmVmcyhzdHJ1Y3QgYWRkcmVzc19zcGFjZSAqbWFwcGluZywgc3RydWN0IHBh Z2UgKnBhZ2UpCiAJICogWk9ORV9ERVZJQ0UgcGFnZXMuCiAJICovCiAJZXhwZWN0ZWRfY291bnQg Kz0gaXNfZGV2aWNlX3ByaXZhdGVfcGFnZShwYWdlKTsKLQlleHBlY3RlZF9jb3VudCArPSBpc19k ZXZpY2VfcHVibGljX3BhZ2UocGFnZSk7CiAJaWYgKG1hcHBpbmcpCiAJCWV4cGVjdGVkX2NvdW50 ICs9IGhwYWdlX25yX3BhZ2VzKHBhZ2UpICsgcGFnZV9oYXNfcHJpdmF0ZShwYWdlKTsKIApAQCAt OTk0LDEwICs5OTEsNyBAQCBzdGF0aWMgaW50IG1vdmVfdG9fbmV3X3BhZ2Uoc3RydWN0IHBhZ2Ug Km5ld3BhZ2UsIHN0cnVjdCBwYWdlICpwYWdlLAogCQlpZiAoIVBhZ2VNYXBwaW5nRmxhZ3MocGFn ZSkpCiAJCQlwYWdlLT5tYXBwaW5nID0gTlVMTDsKIAotCQlpZiAodW5saWtlbHkoaXNfem9uZV9k ZXZpY2VfcGFnZShuZXdwYWdlKSkpIHsKLQkJCWlmIChpc19kZXZpY2VfcHVibGljX3BhZ2UobmV3 cGFnZSkpCi0JCQkJZmx1c2hfZGNhY2hlX3BhZ2UobmV3cGFnZSk7Ci0JCX0gZWxzZQorCQlpZiAo bGlrZWx5KCFpc196b25lX2RldmljZV9wYWdlKG5ld3BhZ2UpKSkKIAkJCWZsdXNoX2RjYWNoZV9w YWdlKG5ld3BhZ2UpOwogCiAJfQpAQCAtMjQwNiwxNiArMjQwMCw3IEBAIHN0YXRpYyBib29sIG1p Z3JhdGVfdm1hX2NoZWNrX3BhZ2Uoc3RydWN0IHBhZ2UgKnBhZ2UpCiAJCSAqIEZJWE1FIHByb3Bl ciBzb2x1dGlvbiBpcyB0byByZXdvcmsgbWlncmF0aW9uX2VudHJ5X3dhaXQoKSBzbwogCQkgKiBp dCBkb2VzIG5vdCBuZWVkIHRvIHRha2UgYSByZWZlcmVuY2Ugb24gcGFnZS4KIAkJICovCi0JCWlm IChpc19kZXZpY2VfcHJpdmF0ZV9wYWdlKHBhZ2UpKQotCQkJcmV0dXJuIHRydWU7Ci0KLQkJLyoK LQkJICogT25seSBhbGxvdyBkZXZpY2UgcHVibGljIHBhZ2UgdG8gYmUgbWlncmF0ZWQgYW5kIGFj Y291bnQgZm9yCi0JCSAqIHRoZSBleHRyYSByZWZlcmVuY2UgY291bnQgaW1wbHkgYnkgWk9ORV9E RVZJQ0UgcGFnZXMuCi0JCSAqLwotCQlpZiAoIWlzX2RldmljZV9wdWJsaWNfcGFnZShwYWdlKSkK LQkJCXJldHVybiBmYWxzZTsKLQkJZXh0cmErKzsKKwkJcmV0dXJuIGlzX2RldmljZV9wcml2YXRl X3BhZ2UocGFnZSk7CiAJfQogCiAJLyogRm9yIGZpbGUgYmFjayBwYWdlICovCkBAIC0yNjY1LDEx ICsyNjUwLDYgQEAgc3RhdGljIHZvaWQgbWlncmF0ZV92bWFfaW5zZXJ0X3BhZ2Uoc3RydWN0IG1p Z3JhdGVfdm1hICptaWdyYXRlLAogCiAJCQlzd3BfZW50cnkgPSBtYWtlX2RldmljZV9wcml2YXRl 
X2VudHJ5KHBhZ2UsIHZtYS0+dm1fZmxhZ3MgJiBWTV9XUklURSk7CiAJCQllbnRyeSA9IHN3cF9l bnRyeV90b19wdGUoc3dwX2VudHJ5KTsKLQkJfSBlbHNlIGlmIChpc19kZXZpY2VfcHVibGljX3Bh Z2UocGFnZSkpIHsKLQkJCWVudHJ5ID0gcHRlX21rb2xkKG1rX3B0ZShwYWdlLCBSRUFEX09OQ0Uo dm1hLT52bV9wYWdlX3Byb3QpKSk7Ci0JCQlpZiAodm1hLT52bV9mbGFncyAmIFZNX1dSSVRFKQot CQkJCWVudHJ5ID0gcHRlX21rd3JpdGUocHRlX21rZGlydHkoZW50cnkpKTsKLQkJCWVudHJ5ID0g cHRlX21rZGV2bWFwKGVudHJ5KTsKIAkJfQogCX0gZWxzZSB7CiAJCWVudHJ5ID0gbWtfcHRlKHBh Z2UsIHZtYS0+dm1fcGFnZV9wcm90KTsKQEAgLTI3ODksNyArMjc2OSw3IEBAIHN0YXRpYyB2b2lk IG1pZ3JhdGVfdm1hX3BhZ2VzKHN0cnVjdCBtaWdyYXRlX3ZtYSAqbWlncmF0ZSkKIAkJCQkJbWln cmF0ZS0+c3JjW2ldICY9IH5NSUdSQVRFX1BGTl9NSUdSQVRFOwogCQkJCQljb250aW51ZTsKIAkJ CQl9Ci0JCQl9IGVsc2UgaWYgKCFpc19kZXZpY2VfcHVibGljX3BhZ2UobmV3cGFnZSkpIHsKKwkJ CX0gZWxzZSB7CiAJCQkJLyoKIAkJCQkgKiBPdGhlciB0eXBlcyBvZiBaT05FX0RFVklDRSBwYWdl IGFyZSBub3QKIAkJCQkgKiBzdXBwb3J0ZWQuCmRpZmYgLS1naXQgYS9tbS9zd2FwLmMgYi9tbS9z d2FwLmMKaW5kZXggN2VkZTNlZGRjMTJhLi44MzEwNzQxMGQyOWYgMTAwNjQ0Ci0tLSBhL21tL3N3 YXAuYworKysgYi9tbS9zd2FwLmMKQEAgLTc0MCwxNyArNzQwLDYgQEAgdm9pZCByZWxlYXNlX3Bh Z2VzKHN0cnVjdCBwYWdlICoqcGFnZXMsIGludCBucikKIAkJaWYgKGlzX2h1Z2VfemVyb19wYWdl KHBhZ2UpKQogCQkJY29udGludWU7CiAKLQkJLyogRGV2aWNlIHB1YmxpYyBwYWdlIGNhbiBub3Qg YmUgaHVnZSBwYWdlICovCi0JCWlmIChpc19kZXZpY2VfcHVibGljX3BhZ2UocGFnZSkpIHsKLQkJ CWlmIChsb2NrZWRfcGdkYXQpIHsKLQkJCQlzcGluX3VubG9ja19pcnFyZXN0b3JlKCZsb2NrZWRf cGdkYXQtPmxydV9sb2NrLAotCQkJCQkJICAgICAgIGZsYWdzKTsKLQkJCQlsb2NrZWRfcGdkYXQg PSBOVUxMOwotCQkJfQotCQkJcHV0X2Rldm1hcF9tYW5hZ2VkX3BhZ2UocGFnZSk7Ci0JCQljb250 aW51ZTsKLQkJfQotCiAJCXBhZ2UgPSBjb21wb3VuZF9oZWFkKHBhZ2UpOwogCQlpZiAoIXB1dF9w YWdlX3Rlc3R6ZXJvKHBhZ2UpKQogCQkJY29udGludWU7Ci0tIAoyLjIwLjEKCl9fX19fX19fX19f X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fCk5vdXZlYXUgbWFpbGluZyBsaXN0 Ck5vdXZlYXVAbGlzdHMuZnJlZWRlc2t0b3Aub3JnCmh0dHBzOi8vbGlzdHMuZnJlZWRlc2t0b3Au b3JnL21haWxtYW4vbGlzdGluZm8vbm91dmVhdQ==