From: Jerome Glisse <jglisse@redhat.com>
To: Balbir Singh <bsingharora@gmail.com>
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, John Hubbard <jhubbard@nvidia.com>,
	Jatin Kumar <jakumar@nvidia.com>,
	Mark Hairgrove <mhairgrove@nvidia.com>,
	Sherry Cheung <SCheung@nvidia.com>,
	Subhash Gutti <sgutti@nvidia.com>
Subject: Re: [HMM v13 16/18] mm/hmm/migrate: new memory migration helper for use with device memory
Date: Mon, 21 Nov 2016 00:31:26 -0500	[thread overview]
Message-ID: <20161121053125.GG7872@redhat.com> (raw)
In-Reply-To: <fd02ccec-800f-e0ff-a51f-b42df13b4c9b@gmail.com>

On Mon, Nov 21, 2016 at 02:30:46PM +1100, Balbir Singh wrote:
> On 19/11/16 05:18, Jérôme Glisse wrote:

[...]

> > +
> > +
> > +#if defined(CONFIG_HMM)
> > +struct hmm_migrate {
> > +	struct vm_area_struct	*vma;
> > +	unsigned long		start;
> > +	unsigned long		end;
> > +	unsigned long		npages;
> > +	hmm_pfn_t		*pfns;
> 
> I presume the destination is pfns[] or is the source?

Both. When alloc_and_copy() is called, the array is filled with the source
memory entries, but once that callback returns it must have the destination
memory entries set inside that array. This is what I discussed with Aneesh
in this thread.

> > +};
> > +
> > +static int hmm_collect_walk_pmd(pmd_t *pmdp,
> > +				unsigned long start,
> > +				unsigned long end,
> > +				struct mm_walk *walk)
> > +{
> > +	struct hmm_migrate *migrate = walk->private;
> > +	struct mm_struct *mm = walk->vma->vm_mm;
> > +	unsigned long addr = start;
> > +	spinlock_t *ptl;
> > +	hmm_pfn_t *pfns;
> > +	int pages = 0;
> > +	pte_t *ptep;
> > +
> > +again:
> > +	if (pmd_none(*pmdp))
> > +		return 0;
> > +
> > +	split_huge_pmd(walk->vma, pmdp, addr);
> > +	if (pmd_trans_unstable(pmdp))
> > +		goto again;
> > +
> 
> OK., so we always split THP before migration

Yes, because I need a special swap entry and those do not exist for pmds.

> > +	pfns = &migrate->pfns[(addr - migrate->start) >> PAGE_SHIFT];
> > +	ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
> > +	arch_enter_lazy_mmu_mode();
> > +
> > +	for (; addr < end; addr += PAGE_SIZE, pfns++, ptep++) {
> > +		unsigned long pfn;
> > +		swp_entry_t entry;
> > +		struct page *page;
> > +		hmm_pfn_t flags;
> > +		bool write;
> > +		pte_t pte;
> > +
> > +		pte = ptep_get_and_clear(mm, addr, ptep);
> > +		if (!pte_present(pte)) {
> > +			if (pte_none(pte))
> > +				continue;
> > +
> > +			entry = pte_to_swp_entry(pte);
> > +			if (!is_device_entry(entry)) {
> > +				set_pte_at(mm, addr, ptep, pte);
> 
> Why hard code this, in general the ability to migrate a VMA
> start/end range seems like a useful API.

Some memory cannot be migrated: we cannot migrate something that is already
being migrated, something that is swapped out, or bad memory ... I only try
to migrate valid memory.

> > +				continue;
> > +			}
> > +
> > +			flags = HMM_PFN_DEVICE | HMM_PFN_UNADDRESSABLE;
> 
> Currently UNADDRESSABLE?

Yes, this is a special device swap entry, and thus it is unaddressable
memory. The destination memory might also be unaddressable (when migrating
from one device to another device).


> > +			page = device_entry_to_page(entry);
> > +			write = is_write_device_entry(entry);
> > +			pfn = page_to_pfn(page);
> > +
> > +			if (!(page->pgmap->flags & MEMORY_MOVABLE)) {
> > +				set_pte_at(mm, addr, ptep, pte);
> > +				continue;
> > +			}
> > +
> > +		} else {
> > +			pfn = pte_pfn(pte);
> > +			page = pfn_to_page(pfn);
> > +			write = pte_write(pte);
> > +			flags = is_zone_device_page(page) ? HMM_PFN_DEVICE : 0;
> > +		}
> > +
> > +		/* FIXME support THP see hmm_migrate_page_check() */
> > +		if (PageTransCompound(page))
> > +			continue;
> 
> Didn't we split the THP above?

We split the huge pmd, not the huge page. The intention is to support huge
pages, but I wanted to keep the patch simple, and THP needs special handling
when it comes to the refcount check for pins (either on the huge page or on
one of its tail pages).
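
To give an idea of the extra complexity: a THP-aware variant of the check
would have to compare the head page refcount against the mapcount summed
over the pmd mapping and any split pte mappings of the tail pages. A
hypothetical sketch (not what this patch implements):

	static bool hmm_migrate_thp_check(struct page *head, int extra)
	{
		/* total_mapcount() sums pmd and split pte mappings. */
		return (page_count(head) - extra) <= total_mapcount(head);
	}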

> 
> > +
> > +		*pfns = hmm_pfn_from_pfn(pfn) | HMM_PFN_MIGRATE | flags;
> > +		*pfns |= write ? HMM_PFN_WRITE : 0;
> > +		migrate->npages++;
> > +		get_page(page);
> > +
> > +		if (!trylock_page(page)) {
> > +			set_pte_at(mm, addr, ptep, pte);
> 
> put_page()?

No, we will try to lock the page later, and thus we want to keep a reference
on the page.

> > +		} else {
> > +			pte_t swp_pte;
> > +
> > +			*pfns |= HMM_PFN_LOCKED;
> > +
> > +			entry = make_migration_entry(page, write);
> > +			swp_pte = swp_entry_to_pte(entry);
> > +			if (pte_soft_dirty(pte))
> > +				swp_pte = pte_swp_mksoft_dirty(swp_pte);
> > +			set_pte_at(mm, addr, ptep, swp_pte);
> > +
> > +			page_remove_rmap(page, false);
> > +			put_page(page);
> > +			pages++;
> > +		}
> > +	}
> > +
> > +	arch_leave_lazy_mmu_mode();
> > +	pte_unmap_unlock(ptep - 1, ptl);
> > +
> > +	/* Only flush the TLB if we actually modified any entries */
> > +	if (pages)
> > +		flush_tlb_range(walk->vma, start, end);
> > +
> > +	return 0;
> > +}
> > +
> > +static void hmm_migrate_collect(struct hmm_migrate *migrate)
> > +{
> > +	struct mm_walk mm_walk;
> > +
> > +	mm_walk.pmd_entry = hmm_collect_walk_pmd;
> > +	mm_walk.pte_entry = NULL;
> > +	mm_walk.pte_hole = NULL;
> > +	mm_walk.hugetlb_entry = NULL;
> > +	mm_walk.test_walk = NULL;
> > +	mm_walk.vma = migrate->vma;
> > +	mm_walk.mm = migrate->vma->vm_mm;
> > +	mm_walk.private = migrate;
> > +
> > +	mmu_notifier_invalidate_range_start(mm_walk.mm,
> > +					    migrate->start,
> > +					    migrate->end);
> > +	walk_page_range(migrate->start, migrate->end, &mm_walk);
> > +	mmu_notifier_invalidate_range_end(mm_walk.mm,
> > +					  migrate->start,
> > +					  migrate->end);
> > +}
> > +
> > +static inline bool hmm_migrate_page_check(struct page *page, int extra)
> > +{
> > +	/*
> > +	 * FIXME support THP (transparent huge page), it is a bit more complex
> > +	 * to check them than regular pages because they can be mapped with a
> > +	 * pmd or with a pte (split pte mapping).
> > +	 */
> > +	if (PageCompound(page))
> > +		return false;
> 
> PageTransCompound()?

Yes, right now I think they are equivalent on all architectures.
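
For reference, with CONFIG_TRANSPARENT_HUGEPAGE enabled the kernel defines:

	static inline int PageTransCompound(struct page *page)
	{
		return PageCompound(page);
	}

With THP disabled, PageTransCompound() is stubbed to always return 0 while
PageCompound() can still be true (hugetlbfs pages for instance), but hugetlb
vmas are already rejected by hmm_vma_migrate() so the two are interchangeable
here.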


> > +
> > +	if (is_zone_device_page(page))
> > +		extra++;
> > +
> > +	if ((page_count(page) - extra) > page_mapcount(page))
> > +		return false;
> > +
> > +	return true;
> > +}
> > +
> > +static void hmm_migrate_lock_and_isolate(struct hmm_migrate *migrate)
> > +{
> > +	unsigned long addr = migrate->start, i = 0;
> > +	struct mm_struct *mm = migrate->vma->vm_mm;
> > +	struct vm_area_struct *vma = migrate->vma;
> > +	unsigned long restore = 0;
> > +	bool allow_drain = true;
> > +
> > +	lru_add_drain();
> > +
> > +again:
> > +	for (; addr < migrate->end; addr += PAGE_SIZE, i++) {
> > +		struct page *page = hmm_pfn_to_page(migrate->pfns[i]);
> > +
> > +		if (!page)
> > +			continue;
> > +
> > +		if (!(migrate->pfns[i] & HMM_PFN_LOCKED)) {
> > +			lock_page(page);
> > +			migrate->pfns[i] |= HMM_PFN_LOCKED;
> > +		}
> > +
> > +		/* ZONE_DEVICE pages are not on the LRU */
> > +		if (is_zone_device_page(page))
> > +			goto check;
> > +
> > +		if (!PageLRU(page) && allow_drain) {
> > +			/* Drain CPU's pagevec so page can be isolated */
> > +			lru_add_drain_all();
> > +			allow_drain = false;
> > +			goto again;
> > +		}
> > +
> > +		if (isolate_lru_page(page)) {
> > +			migrate->pfns[i] &= ~HMM_PFN_MIGRATE;
> > +			migrate->npages--;
> > +			put_page(page);
> > +			restore++;
> > +		} else
> > +			/* Drop the reference we took in collect */
> > +			put_page(page);
> > +
> > +check:
> > +		if (!hmm_migrate_page_check(page, 1)) {
> > +			migrate->pfns[i] &= ~HMM_PFN_MIGRATE;
> > +			migrate->npages--;
> > +			restore++;
> > +		}
> > +	}
> > +
> > +	if (!restore)
> > +		return;
> > +
> > +	for (addr = migrate->start, i = 0; addr < migrate->end;) {
> > +		struct page *page = hmm_pfn_to_page(migrate->pfns[i]);
> > +		unsigned long next, restart;
> > +		spinlock_t *ptl;
> > +		pgd_t *pgdp;
> > +		pud_t *pudp;
> > +		pmd_t *pmdp;
> > +		pte_t *ptep;
> > +
> > +		if (!page || !(migrate->pfns[i] & HMM_PFN_MIGRATE)) {
> > +			addr += PAGE_SIZE;
> > +			i++;
> > +			continue;
> > +		}
> > +
> > +		restart = addr;
> > +		pgdp = pgd_offset(mm, addr);
> > +		if (!pgdp || pgd_none_or_clear_bad(pgdp)) {
> > +			addr = pgd_addr_end(addr, migrate->end);
> > +			i = (addr - migrate->start) >> PAGE_SHIFT;
> > +			continue;
> > +		}
> > +		pudp = pud_offset(pgdp, addr);
> > +		if (!pudp || pud_none(*pudp)) {
> > +			addr = pgd_addr_end(addr, migrate->end);
> > +			i = (addr - migrate->start) >> PAGE_SHIFT;
> > +			continue;
> > +		}
> > +		pmdp = pmd_offset(pudp, addr);
> > +		next = pmd_addr_end(addr, migrate->end);
> > +		if (!pmdp || pmd_none(*pmdp) || pmd_trans_huge(*pmdp)) {
> > +			addr = next;
> > +			i = (addr - migrate->start) >> PAGE_SHIFT;
> > +			continue;
> > +		}
> > +		ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
> > +		for (; addr < next; addr += PAGE_SIZE, i++, ptep++) {
> > +			swp_entry_t entry;
> > +			bool write;
> > +			pte_t pte;
> > +
> > +			page = hmm_pfn_to_page(migrate->pfns[i]);
> > +			if (!page || (migrate->pfns[i] & HMM_PFN_MIGRATE))
> > +				continue;
> > +
> > +			write = migrate->pfns[i] & HMM_PFN_WRITE;
> > +			write &= (vma->vm_flags & VM_WRITE);
> > +
> > +			/* Here it means pte must be a valid migration entry */
> > +			pte = ptep_get_and_clear(mm, addr, ptep);
> > +			if (pte_none(pte) || pte_present(pte))
> > +				/* SOMETHING BAD IS GOING ON ! */
> > +				continue;
> > +			entry = pte_to_swp_entry(pte);
> > +			if (!is_migration_entry(entry))
> > +				/* SOMETHING BAD IS GOING ON ! */
> > +				continue;
> > +
> > +			if (is_zone_device_page(page) &&
> > +			    !is_addressable_page(page)) {
> > +				entry = make_device_entry(page, write);
> > +				pte = swp_entry_to_pte(entry);
> > +			} else {
> > +				pte = mk_pte(page, vma->vm_page_prot);
> > +				pte = pte_mkold(pte);
> > +				if (write)
> > +					pte = pte_mkwrite(pte);
> > +			}
> > +			if (pte_swp_soft_dirty(*ptep))
> > +				pte = pte_mksoft_dirty(pte);
> > +
> > +			get_page(page);
> > +			set_pte_at(mm, addr, ptep, pte);
> > +			if (PageAnon(page))
> > +				page_add_anon_rmap(page, vma, addr, false);
> > +			else
> > +				page_add_file_rmap(page, false);
> 
> Why do we do the rmap bits here?

Because we did page_remove_rmap() in hmm_migrate_collect(), we need to
restore the rmap here.


> > +		}
> > +		pte_unmap_unlock(ptep - 1, ptl);
> > +
> > +		addr = restart;
> > +		i = (addr - migrate->start) >> PAGE_SHIFT;
> > +		for (; addr < next && restore; addr += PAGE_SIZE, i++) {
> > +			page = hmm_pfn_to_page(migrate->pfns[i]);
> > +			if (!page || (migrate->pfns[i] & HMM_PFN_MIGRATE))
> > +				continue;
> > +
> > +			migrate->pfns[i] = 0;
> > +			unlock_page(page);
> > +			restore--;
> > +
> > +			if (is_zone_device_page(page)) {
> > +				put_page(page);
> > +				continue;
> > +			}
> > +
> > +			putback_lru_page(page);
> > +		}
> > +
> > +		if (!restore)
> > +			break;
> > +	}
> > +}
> > +
> > +static void hmm_migrate_unmap(struct hmm_migrate *migrate)
> > +{
> > +	int flags = TTU_MIGRATION | TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS;
> > +	unsigned long addr = migrate->start, i = 0, restore = 0;
> > +
> > +	for (; addr < migrate->end; addr += PAGE_SIZE, i++) {
> > +		struct page *page = hmm_pfn_to_page(migrate->pfns[i]);
> > +
> > +		if (!page || !(migrate->pfns[i] & HMM_PFN_MIGRATE))
> > +			continue;
> > +
> > +		try_to_unmap(page, flags);
> > +		if (page_mapped(page) || !hmm_migrate_page_check(page, 1)) {
> > +			migrate->pfns[i] &= ~HMM_PFN_MIGRATE;
> > +			migrate->npages--;
> > +			restore++;
> > +		}
> > +	}
> > +
> > +	for (; (addr < migrate->end) && restore; addr += PAGE_SIZE, i++) {
> > +		struct page *page = hmm_pfn_to_page(migrate->pfns[i]);
> > +
> > +		if (!page || (migrate->pfns[i] & HMM_PFN_MIGRATE))
> > +			continue;
> > +
> > +		remove_migration_ptes(page, page, false);
> > +
> > +		migrate->pfns[i] = 0;
> > +		unlock_page(page);
> > +		restore--;
> > +
> > +		if (is_zone_device_page(page)) {
> > +			put_page(page);
> > +			continue;
> > +		}
> > +
> > +		putback_lru_page(page);
> > +	}
> > +}
> > +
> > +static void hmm_migrate_struct_page(struct hmm_migrate *migrate)
> > +{
> > +	unsigned long addr = migrate->start, i = 0;
> > +	struct mm_struct *mm = migrate->vma->vm_mm;
> > +
> > +	for (; addr < migrate->end;) {
> > +		unsigned long next;
> > +		pgd_t *pgdp;
> > +		pud_t *pudp;
> > +		pmd_t *pmdp;
> > +		pte_t *ptep;
> > +
> > +		pgdp = pgd_offset(mm, addr);
> > +		if (!pgdp || pgd_none_or_clear_bad(pgdp)) {
> > +			addr = pgd_addr_end(addr, migrate->end);
> > +			i = (addr - migrate->start) >> PAGE_SHIFT;
> > +			continue;
> > +		}
> > +		pudp = pud_offset(pgdp, addr);
> > +		if (!pudp || pud_none(*pudp)) {
> > +			addr = pgd_addr_end(addr, migrate->end);
> > +			i = (addr - migrate->start) >> PAGE_SHIFT;
> > +			continue;
> > +		}
> > +		pmdp = pmd_offset(pudp, addr);
> > +		next = pmd_addr_end(addr, migrate->end);
> > +		if (!pmdp || pmd_none(*pmdp) || pmd_trans_huge(*pmdp)) {
> > +			addr = next;
> > +			i = (addr - migrate->start) >> PAGE_SHIFT;
> > +			continue;
> > +		}
> > +
> > +		/* No need to lock, nothing can change from under us */
> > +		ptep = pte_offset_map(pmdp, addr);
> > +		for (; addr < next; addr += PAGE_SIZE, i++, ptep++) {
> > +			struct address_space *mapping;
> > +			struct page *newpage, *page;
> > +			swp_entry_t entry;
> > +			int r;
> > +
> > +			newpage = hmm_pfn_to_page(migrate->pfns[i]);
> > +			if (!newpage || !(migrate->pfns[i] & HMM_PFN_MIGRATE))
> > +				continue;
> > +			if (pte_none(*ptep) || pte_present(*ptep)) {
> > +				/* This should not happen but be nice */
> > +				migrate->pfns[i] = 0;
> > +				put_page(newpage);
> > +				continue;
> > +			}
> > +			entry = pte_to_swp_entry(*ptep);
> > +			if (!is_migration_entry(entry)) {
> > +				/* This should not happen but be nice */
> > +				migrate->pfns[i] = 0;
> > +				put_page(newpage);
> > +				continue;
> > +			}
> > +
> > +			page = migration_entry_to_page(entry);
> > +			mapping = page_mapping(page);
> > +
> > +			/*
> > +			 * For now only support private anonymous when migrating
> > +			 * to un-addressable device memory.
> 
> I thought HMM supported page cache migration as well.

Not for un-addressable memory. Un-addressable memory needs more filesystem
changes to handle read/write and writeback. That will be part of a separate
patchset.


> > +			 */
> > +			if (mapping && is_zone_device_page(newpage) &&
> > +			    !is_addressable_page(newpage)) {
> > +				migrate->pfns[i] &= ~HMM_PFN_MIGRATE;
> > +				continue;
> > +			}
> > +
> > +			r = migrate_page(mapping, newpage, page,
> > +					 MIGRATE_SYNC, false);
> > +			if (r != MIGRATEPAGE_SUCCESS)
> > +				migrate->pfns[i] &= ~HMM_PFN_MIGRATE;
> > +		}
> > +		pte_unmap(ptep - 1);
> > +	}
> > +}
> > +
> > +static void hmm_migrate_remove_migration_pte(struct hmm_migrate *migrate)
> > +{
> > +	unsigned long addr = migrate->start, i = 0;
> > +	struct mm_struct *mm = migrate->vma->vm_mm;
> > +
> > +	for (; addr < migrate->end;) {
> > +		unsigned long next;
> > +		pgd_t *pgdp;
> > +		pud_t *pudp;
> > +		pmd_t *pmdp;
> > +		pte_t *ptep;
> > +
> > +		pgdp = pgd_offset(mm, addr);
> > +		pudp = pud_offset(pgdp, addr);
> > +		pmdp = pmd_offset(pudp, addr);
> > +		next = pmd_addr_end(addr, migrate->end);
> > +
> > +		/* No need to lock, nothing can change from under us */
> > +		ptep = pte_offset_map(pmdp, addr);
> > +		for (; addr < next; addr += PAGE_SIZE, i++, ptep++) {
> > +			struct page *page, *newpage;
> > +			swp_entry_t entry;
> > +
> > +			if (pte_none(*ptep) || pte_present(*ptep))
> > +				continue;
> > +			entry = pte_to_swp_entry(*ptep);
> > +			if (!is_migration_entry(entry))
> > +				continue;
> > +
> > +			page = migration_entry_to_page(entry);
> > +			newpage = hmm_pfn_to_page(migrate->pfns[i]);
> > +			if (!newpage)
> > +				newpage = page;
> > +			remove_migration_ptes(page, newpage, false);
> > +
> > +			migrate->pfns[i] = 0;
> > +			unlock_page(page);
> > +			migrate->npages--;
> > +
> > +			if (is_zone_device_page(page))
> > +				put_page(page);
> > +			else
> > +				putback_lru_page(page);
> > +
> > +			if (newpage != page) {
> > +				unlock_page(newpage);
> > +				if (is_zone_device_page(newpage))
> > +					put_page(newpage);
> > +				else
> > +					putback_lru_page(newpage);
> > +			}
> > +		}
> > +		pte_unmap(ptep - 1);
> > +	}
> > +}
> > +
> > +/*
> > + * hmm_vma_migrate() - migrate a range of memory inside a vma using accel copy
> > + *
> > + * @ops: migration callback for allocating destination memory and copying
> > + * @vma: virtual memory area containing the range to be migrated
> > + * @start: start address of the range to migrate (inclusive)
> > + * @end: end address of the range to migrate (exclusive)
> > + * @pfns: array of hmm_pfn_t first containing source pfns then destination
> > + * @private: pointer passed back to each of the callbacks
> > + * Returns: 0 on success, error code otherwise
> > + *
> > + * This will try to migrate a range of memory using callbacks to allocate
> > + * and copy memory from source to destination. This function will first
> > + * collect, lock and unmap pages in the range and then call the
> > + * alloc_and_copy() callback for the device driver to allocate destination
> > + * memory and copy from source.
> > + *
> > + * Then it will proceed and try to effectively migrate the page (struct page
> > + * metadata), a step that can fail for various reasons. Before updating the
> > + * CPU page table it will call the finalize_and_map() callback so that the
> > + * device driver can inspect what has been successfully migrated and update
> > + * its own page table (this latter aspect is not mandatory and only makes
> > + * sense for some users of this API).
> > + *
> > + * Finally the function updates the CPU page table and unlocks the pages
> > + * before returning 0.
> > + *
> > + * It will return an error code only if one of the arguments is invalid.
> > + */
> > +int hmm_vma_migrate(const struct hmm_migrate_ops *ops,
> > +		    struct vm_area_struct *vma,
> > +		    unsigned long start,
> > +		    unsigned long end,
> > +		    hmm_pfn_t *pfns,
> > +		    void *private)
> > +{
> > +	struct hmm_migrate migrate;
> > +
> > +	/* Sanity check the arguments */
> > +	start &= PAGE_MASK;
> > +	end &= PAGE_MASK;
> > +	if (!vma || !ops || !pfns || start >= end)
> > +		return -EINVAL;
> > +	if (is_vm_hugetlb_page(vma) || (vma->vm_flags & VM_SPECIAL))
> > +		return -EINVAL;
> > +	if (start < vma->vm_start || start >= vma->vm_end)
> > +		return -EINVAL;
> > +	if (end <= vma->vm_start || end > vma->vm_end)
> > +		return -EINVAL;
> > +
> > +	migrate.start = start;
> > +	migrate.pfns = pfns;
> > +	migrate.npages = 0;
> > +	migrate.end = end;
> > +	migrate.vma = vma;
> > +
> > +	/* Collect, and try to unmap source pages */
> > +	hmm_migrate_collect(&migrate);
> > +	if (!migrate.npages)
> > +		return 0;
> > +
> > +	/* Lock and isolate page */
> > +	hmm_migrate_lock_and_isolate(&migrate);
> > +	if (!migrate.npages)
> > +		return 0;
> > +
> > +	/* Unmap pages */
> > +	hmm_migrate_unmap(&migrate);
> > +	if (!migrate.npages)
> > +		return 0;
> > +
> > +	/*
> > +	 * At this point pages are locked and unmapped and thus they have stable
> > +	 * content and can safely be copied to destination memory that is
> > +	 * allocated by the callback.
> > +	 *
> > +	 * Note that migration can fail in hmm_migrate_struct_page() for each
> > +	 * individual page.
> > +	 */
> > +	ops->alloc_and_copy(vma, start, end, pfns, private);
> 
> What is the expectation from alloc_and_copy()? Can it fail?

It can fail, but there is no global status; it is all handled on an
individual page basis. So for instance if a device can only allocate its
device memory in chunks of 64 pages, then it can migrate any chunk that
matches this constraint and fail for anything smaller than that.
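
Putting it together, usage looks something like this (a sketch; the callback
implementations and their names are hypothetical, the struct and field names
are as in this patch):

	static const struct hmm_migrate_ops my_migrate_ops = {
		.alloc_and_copy		= my_alloc_and_copy,
		.finalize_and_map	= my_finalize_and_map,
	};

	/* pfns[] needs one entry per page in [start, end); pages whose entry
	 * loses HMM_PFN_MIGRATE along the way simply stay where they are,
	 * there is no all-or-nothing failure. */
	ret = hmm_vma_migrate(&my_migrate_ops, vma, start, end, pfns, private);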


> > +
> > +	/* This does the real migration of struct page */
> > +	hmm_migrate_struct_page(&migrate);
> > +
> > +	ops->finalize_and_map(vma, start, end, pfns, private);
> 
> Is this just notification to the driver or more?

Just a notification to the driver.
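
For instance (a sketch, my_device_map_page() being a hypothetical driver
helper), the driver can walk pfns[] and map into its own page table only
what actually migrated:

	static void my_finalize_and_map(struct vm_area_struct *vma,
					unsigned long start, unsigned long end,
					hmm_pfn_t *pfns, void *private)
	{
		unsigned long addr, i;

		for (addr = start, i = 0; addr < end; addr += PAGE_SIZE, i++) {
			/* Entries that lost HMM_PFN_MIGRATE did not migrate. */
			if (!(pfns[i] & HMM_PFN_MIGRATE))
				continue;
			my_device_map_page(private, addr, pfns[i]);
		}
	}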

Cheers,
Jérôme
