From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ml01.01.org (Postfix) with ESMTPS id 206342129DBAD for ; Mon, 17 Jun 2019 05:28:29 -0700 (PDT)
From: Christoph Hellwig
Subject: [PATCH 20/25] mm: remove hmm_devmem_add
Date: Mon, 17 Jun 2019 14:27:28 +0200
Message-Id: <20190617122733.22432-21-hch@lst.de>
In-Reply-To: <20190617122733.22432-1-hch@lst.de>
References: <20190617122733.22432-1-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Errors-To: linux-nvdimm-bounces@lists.01.org
Sender: "Linux-nvdimm"
To: Dan Williams , =?UTF-8?q?J=C3=A9r=C3=B4me=20Glisse?= , Jason Gunthorpe , Ben Skeggs
Cc: linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, linux-mm@kvack.org, nouveau@lists.freedesktop.org

There isn't really much value add in the hmm_devmem_add wrapper any more, as using devm_memremap_pages directly now is just as simple.

Signed-off-by: Christoph Hellwig
Reviewed-by: Jason Gunthorpe
---
 Documentation/vm/hmm.rst |  26 --------
 include/linux/hmm.h      | 129 ---------------------------------------
 mm/hmm.c                 | 110 ---------------------------------
 3 files changed, 265 deletions(-)

diff --git a/Documentation/vm/hmm.rst b/Documentation/vm/hmm.rst index 7b6eeda5a7c0..b1c960fe246d 100644 --- a/Documentation/vm/hmm.rst +++ b/Documentation/vm/hmm.rst @@ -336,32 +336,6 @@ directly using struct page for device memory which left most kernel code paths unaware of the difference. We only need to make sure that no one ever tries to map those pages from the CPU side. -HMM provides a set of helpers to register and hotplug device memory as a new -region needing a struct page. This is offered through a very simple API:: - - struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops, - struct device *device, - unsigned long size); - void hmm_devmem_remove(struct hmm_devmem *devmem); - -The hmm_devmem_ops is where most of the important things are:: - - struct hmm_devmem_ops { - void (*free)(struct hmm_devmem *devmem, struct page *page); - vm_fault_t (*fault)(struct hmm_devmem *devmem, - struct vm_area_struct *vma, - unsigned long addr, - struct page *page, - unsigned flags, - pmd_t *pmdp); - }; - -The first callback (free()) happens when the last reference on a device page is -dropped. This means the device page is now free and no longer used by anyone. -The second callback happens whenever the CPU tries to access a device page -which it cannot do. This second callback must trigger a migration back to -system memory.
- Migration to and from device memory =================================== diff --git a/include/linux/hmm.h b/include/linux/hmm.h index 89571e8d9c63..50ef29958604 100644 --- a/include/linux/hmm.h +++ b/include/linux/hmm.h @@ -587,135 +587,6 @@ static inline void hmm_mm_init(struct mm_struct *mm) {} #endif /* IS_ENABLED(CONFIG_HMM_MIRROR) */ #if IS_ENABLED(CONFIG_DEVICE_PRIVATE) || IS_ENABLED(CONFIG_DEVICE_PUBLIC) -struct hmm_devmem; - -/* - * struct hmm_devmem_ops - callback for ZONE_DEVICE memory events - * - * @free: call when refcount on page reach 1 and thus is no longer use - * @fault: call when there is a page fault to unaddressable memory - * - * Both callback happens from page_free() and page_fault() callback of struct - * dev_pagemap respectively. See include/linux/memremap.h for more details on - * those. - * - * The hmm_devmem_ops callback are just here to provide a coherent and - * uniq API to device driver and device driver should not register their - * own page_free() or page_fault() but rely on the hmm_devmem_ops call- - * back. - */ -struct hmm_devmem_ops { - /* - * free() - free a device page - * @devmem: device memory structure (see struct hmm_devmem) - * @page: pointer to struct page being freed - * - * Call back occurs whenever a device page refcount reach 1 which - * means that no one is holding any reference on the page anymore - * (ZONE_DEVICE page have an elevated refcount of 1 as default so - * that they are not release to the general page allocator). - * - * Note that callback has exclusive ownership of the page (as no - * one is holding any reference). - */ - void (*free)(struct hmm_devmem *devmem, struct page *page); - /* - * fault() - CPU page fault or get user page (GUP) - * @devmem: device memory structure (see struct hmm_devmem) - * @vma: virtual memory area containing the virtual address - * @addr: virtual address that faulted or for which there is a GUP - * @page: pointer to struct page backing virtual address (unreliable) - * @flags: FAULT_FLAG_* (see include/linux/mm.h) - * @pmdp: page middle directory - * Return: VM_FAULT_MINOR/MAJOR on success or one of VM_FAULT_ERROR - * on error - * - * The callback occurs whenever there is a CPU page fault or GUP on a - * virtual address. This means that the device driver must migrate the - * page back to regular memory (CPU accessible). - * - * The device driver is free to migrate more than one page from the - * fault() callback as an optimization. However if the device decides - * to migrate more than one page it must always priotirize the faulting - * address over the others. - * - * The struct page pointer is only given as a hint to allow quick - * lookup of internal device driver data. A concurrent migration - * might have already freed that page and the virtual address might - * no longer be backed by it. So it should not be modified by the - * callback. - * - * Note that mmap semaphore is held in read mode at least when this - * callback occurs, hence the vma is valid upon callback entry. 
- */ - vm_fault_t (*fault)(struct hmm_devmem *devmem, - struct vm_area_struct *vma, - unsigned long addr, - const struct page *page, - unsigned int flags, - pmd_t *pmdp); -}; - -/* - * struct hmm_devmem - track device memory - * - * @completion: completion object for device memory - * @pfn_first: first pfn for this resource (set by hmm_devmem_add()) - * @pfn_last: last pfn for this resource (set by hmm_devmem_add()) - * @resource: IO resource reserved for this chunk of memory - * @pagemap: device page map for that chunk - * @device: device to bind resource to - * @ops: memory operations callback - * @ref: per CPU refcount - * @page_fault: callback when CPU fault on an unaddressable device page - * - * This is a helper structure for device drivers that do not wish to implement - * the gory details related to hotplugging new memoy and allocating struct - * pages. - * - * Device drivers can directly use ZONE_DEVICE memory on their own if they - * wish to do so. - * - * The page_fault() callback must migrate page back, from device memory to - * system memory, so that the CPU can access it. This might fail for various - * reasons (device issues, device have been unplugged, ...). When such error - * conditions happen, the page_fault() callback must return VM_FAULT_SIGBUS and - * set the CPU page table entry to "poisoned". - * - * Note that because memory cgroup charges are transferred to the device memory, - * this should never fail due to memory restrictions. However, allocation - * of a regular system page might still fail because we are out of memory. If - * that happens, the page_fault() callback must return VM_FAULT_OOM. - * - * The page_fault() callback can also try to migrate back multiple pages in one - * chunk, as an optimization. It must, however, prioritize the faulting address - * over all the others. - */ - -struct hmm_devmem { - struct completion completion; - unsigned long pfn_first; - unsigned long pfn_last; - struct resource *resource; - struct device *device; - struct dev_pagemap pagemap; - const struct hmm_devmem_ops *ops; - struct percpu_ref ref; -}; - -/* - * To add (hotplug) device memory, HMM assumes that there is no real resource - * that reserves a range in the physical address space (this is intended to be - * use by unaddressable device memory). It will reserve a physical range big - * enough and allocate struct page for it. - * - * The device driver can wrap the hmm_devmem struct inside a private device - * driver struct. 
- */ -struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops, - struct device *device, - unsigned long size); - /* * hmm_devmem_page_set_drvdata - set per-page driver data field * diff --git a/mm/hmm.c b/mm/hmm.c index 0ef1a1921afb..17ed080d9c32 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -1324,113 +1324,3 @@ long hmm_range_dma_unmap(struct hmm_range *range, } EXPORT_SYMBOL(hmm_range_dma_unmap); #endif /* IS_ENABLED(CONFIG_HMM_MIRROR) */ - - -#if IS_ENABLED(CONFIG_DEVICE_PRIVATE) || IS_ENABLED(CONFIG_DEVICE_PUBLIC) -static void hmm_devmem_ref_release(struct percpu_ref *ref) -{ - struct hmm_devmem *devmem; - - devmem = container_of(ref, struct hmm_devmem, ref); - complete(&devmem->completion); -} - -static void hmm_devmem_ref_exit(struct dev_pagemap *pgmap) -{ - struct hmm_devmem *devmem; - - devmem = container_of(pgmap, struct hmm_devmem, pagemap); - wait_for_completion(&devmem->completion); - percpu_ref_exit(pgmap->ref); -} - -static void hmm_devmem_ref_kill(struct dev_pagemap *pgmap) -{ - percpu_ref_kill(pgmap->ref); -} - -static vm_fault_t hmm_devmem_migrate_to_ram(struct vm_fault *vmf) -{ - struct hmm_devmem *devmem = - container_of(vmf->page->pgmap, struct hmm_devmem, pagemap); - - return devmem->ops->fault(devmem, vmf->vma, vmf->address, vmf->page, - vmf->flags, vmf->pmd); -} - -static void hmm_devmem_free(struct page *page) -{ - struct hmm_devmem *devmem = - container_of(page->pgmap, struct hmm_devmem, pagemap); - - devmem->ops->free(devmem, page); -} - -static const struct dev_pagemap_ops hmm_pagemap_ops = { - .page_free = hmm_devmem_free, - .kill = hmm_devmem_ref_kill, - .cleanup = hmm_devmem_ref_exit, - .migrate_to_ram = hmm_devmem_migrate_to_ram, -}; - -/* - * hmm_devmem_add() - hotplug ZONE_DEVICE memory for device memory - * - * @ops: memory event device driver callback (see struct hmm_devmem_ops) - * @device: device struct to bind the resource too - * @size: size in bytes of the device memory to add - * Return: pointer to new hmm_devmem struct ERR_PTR otherwise - * - * This function first finds an empty range of physical address big enough to - * contain the new resource, and then hotplugs it as ZONE_DEVICE memory, which - * in turn allocates struct pages. It does not do anything beyond that; all - * events affecting the memory will go through the various callbacks provided - * by hmm_devmem_ops struct. - * - * Device driver should call this function during device initialization and - * is then responsible of memory management. HMM only provides helpers. 
- */ -struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops, - struct device *device, - unsigned long size) -{ - struct hmm_devmem *devmem; - void *result; - int ret; - - devmem = devm_kzalloc(device, sizeof(*devmem), GFP_KERNEL); - if (!devmem) - return ERR_PTR(-ENOMEM); - - init_completion(&devmem->completion); - devmem->pfn_first = -1UL; - devmem->pfn_last = -1UL; - devmem->resource = NULL; - devmem->device = device; - devmem->ops = ops; - - ret = percpu_ref_init(&devmem->ref, &hmm_devmem_ref_release, - 0, GFP_KERNEL); - if (ret) - return ERR_PTR(ret); - - devmem->resource = devm_request_free_mem_region(device, &iomem_resource, - size); - if (IS_ERR(devmem->resource)) - return ERR_CAST(devmem->resource); - devmem->pfn_first = devmem->resource->start >> PAGE_SHIFT; - devmem->pfn_last = devmem->pfn_first + - (resource_size(devmem->resource) >> PAGE_SHIFT); - - devmem->pagemap.type = MEMORY_DEVICE_PRIVATE; - devmem->pagemap.res = *devmem->resource; - devmem->pagemap.ops = &hmm_pagemap_ops; - devmem->pagemap.ref = &devmem->ref; - - result = devm_memremap_pages(devmem->device, &devmem->pagemap); - if (IS_ERR(result)) - return result; - return devmem; -} -EXPORT_SYMBOL_GPL(hmm_devmem_add); -#endif /* CONFIG_DEVICE_PRIVATE || CONFIG_DEVICE_PUBLIC */ -- 2.20.1
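As the commit message notes, a driver now open-codes this setup directly. Roughly, the direct devm_memremap_pages() usage mirrors the body of the removed helper above; the sketch below assumes a hypothetical driver, and all my_*-named structures and functions are illustrative placeholders, not kernel APIs:

#include <linux/completion.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/gfp.h>
#include <linux/ioport.h>
#include <linux/memremap.h>
#include <linux/mm.h>
#include <linux/percpu-refcount.h>

struct my_devmem {
        struct completion completion;
        struct percpu_ref ref;
        struct dev_pagemap pagemap;
};

static void my_ref_release(struct percpu_ref *ref)
{
        struct my_devmem *devmem = container_of(ref, struct my_devmem, ref);

        complete(&devmem->completion);
}

static void my_ref_kill(struct dev_pagemap *pgmap)
{
        percpu_ref_kill(pgmap->ref);
}

static void my_ref_cleanup(struct dev_pagemap *pgmap)
{
        struct my_devmem *devmem =
                container_of(pgmap, struct my_devmem, pagemap);

        wait_for_completion(&devmem->completion);
        percpu_ref_exit(pgmap->ref);
}

static void my_page_free(struct page *page)
{
        /* return the device page to the driver's own allocator */
}

static vm_fault_t my_migrate_to_ram(struct vm_fault *vmf)
{
        /* a real driver migrates the page back to system memory here */
        return VM_FAULT_SIGBUS;
}

static const struct dev_pagemap_ops my_pagemap_ops = {
        .page_free      = my_page_free,         /* was hmm_devmem_ops.free */
        .kill           = my_ref_kill,
        .cleanup        = my_ref_cleanup,
        .migrate_to_ram = my_migrate_to_ram,    /* was hmm_devmem_ops.fault */
};

static int my_devmem_add(struct device *dev, struct my_devmem *devmem,
                unsigned long size)
{
        struct resource *res;
        void *addr;
        int ret;

        init_completion(&devmem->completion);
        ret = percpu_ref_init(&devmem->ref, my_ref_release, 0, GFP_KERNEL);
        if (ret)
                return ret;

        /* reserve an unused physical range and hotplug it as ZONE_DEVICE */
        res = devm_request_free_mem_region(dev, &iomem_resource, size);
        if (IS_ERR(res))
                return PTR_ERR(res);

        devmem->pagemap.type = MEMORY_DEVICE_PRIVATE;
        devmem->pagemap.res = *res;
        devmem->pagemap.ops = &my_pagemap_ops;
        devmem->pagemap.ref = &devmem->ref;

        addr = devm_memremap_pages(dev, &devmem->pagemap);
        if (IS_ERR(addr))
                return PTR_ERR(addr);
        return 0;
}

The main difference from hmm_devmem_add() is that the driver owns the dev_pagemap and percpu_ref directly instead of going through struct hmm_devmem, with the .page_free and .migrate_to_ram hooks taking over the roles of the old free() and fault() callbacks.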