From: "Aneesh Kumar K.V"
To: Jerome Glisse
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	John Hubbard, Jatin Kumar, Mark Hairgrove, Sherry Cheung, Subhash Gutti
Subject: Re: [HMM v13 16/18] mm/hmm/migrate: new memory migration helper for use with device memory
Date: Sun, 20 Nov 2016 23:51:48 +0530
In-Reply-To: <20161119171757.GA2194@redhat.com>
References: <1479493107-982-1-git-send-email-jglisse@redhat.com>
	<1479493107-982-17-git-send-email-jglisse@redhat.com>
	<87a8cvmtfp.fsf@linux.vnet.ibm.com>
	<20161119171757.GA2194@redhat.com>
Message-Id: <87ziku1077.fsf@linux.vnet.ibm.com>

Jerome Glisse writes:

.....

>> > +
>> > +		*pfns = hmm_pfn_from_pfn(pfn) | HMM_PFN_MIGRATE | flags;
>> > +		*pfns |= write ? HMM_PFN_WRITE : 0;
>> > +		migrate->npages++;
>> > +		get_page(page);
>> > +
>> > +		if (!trylock_page(page)) {
>> > +			set_pte_at(mm, addr, ptep, pte);
>> > +		} else {
>> > +			pte_t swp_pte;
>> > +
>> > +			*pfns |= HMM_PFN_LOCKED;
>> > +
>> > +			entry = make_migration_entry(page, write);
>> > +			swp_pte = swp_entry_to_pte(entry);
>> > +			if (pte_soft_dirty(pte))
>> > +				swp_pte = pte_swp_mksoft_dirty(swp_pte);
>> > +			set_pte_at(mm, addr, ptep, swp_pte);
>> > +
>> > +			page_remove_rmap(page, false);
>> > +			put_page(page);
>> > +			pages++;
>> > +		}
>>
>> If this is an optimization, can we get it as a separate patch with
>> additional comments? How does successfully taking the page lock imply
>> that this is not a shared mapping?
>
> It can be a shared mapping and that's fine; migration only fails if the
> page is pinned.
>

In the previous mail you replied that the above trylock_page() usage is
an optimization for the usual case where the memory is only used in one
process and no concurrent migration/memory event is happening. How do we
know that it is only in use by one process? I got the part that if we can
take the lock, and since we lock the page early, it avoids concurrent
migration. But I am not sure about the "used by one process" part.
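To make the request concrete, here is a rough sketch of what I would
expect the un-optimized collect step (the first patch of such a split) to
look like. This is only my reading, not code from the patch, and
hmm_collect_one() is a hypothetical name; locking and migration entry
setup would then live in one place, hmm_migrate_lock_and_isolate():

	/*
	 * Hypothetical baseline collect step, without the trylock fast
	 * path: take a reference and record the pfn, leave the pte
	 * untouched, and defer lock_page() + migration entry setup to
	 * hmm_migrate_lock_and_isolate().
	 */
	static void hmm_collect_one(struct hmm_migrate *migrate,
				    struct page *page, unsigned long pfn,
				    bool write, unsigned long *pfns,
				    unsigned long flags)
	{
		*pfns = hmm_pfn_from_pfn(pfn) | HMM_PFN_MIGRATE | flags;
		if (write)
			*pfns |= HMM_PFN_WRITE;
		migrate->npages++;
		get_page(page);	/* pin across the migration attempt */
		/* pte deliberately left in place; no migration entry yet */
	}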
>
>> > +	}
>> > +
>> > +	arch_leave_lazy_mmu_mode();
>> > +	pte_unmap_unlock(ptep - 1, ptl);
>> > +
>> > +	/* Only flush the TLB if we actually modified any entries */
>> > +	if (pages)
>> > +		flush_tlb_range(walk->vma, start, end);
>> > +
>> > +	return 0;
>> > +}
>>
>>
>> So without the optimization, the above function is supposed to raise
>> the refcount and collect in the array all the pfns that we can migrate?
>
> Yes, correct: this function collects all the pages we can migrate in the
> range.
>

.....

>
>> > +static void hmm_migrate_lock_and_isolate(struct hmm_migrate *migrate)
>> > +{
>> > +	unsigned long addr = migrate->start, i = 0;
>> > +	struct mm_struct *mm = migrate->vma->vm_mm;
>> > +	struct vm_area_struct *vma = migrate->vma;
>> > +	unsigned long restore = 0;
>> > +	bool allow_drain = true;
>> > +
>> > +	lru_add_drain();
>> > +
>> > +again:
>> > +	for (; addr < migrate->end; addr += PAGE_SIZE, i++) {
>> > +		struct page *page = hmm_pfn_to_page(migrate->pfns[i]);
>> > +
>> > +		if (!page)
>> > +			continue;
>> > +
>> > +		if (!(migrate->pfns[i] & HMM_PFN_LOCKED)) {
>> > +			lock_page(page);
>> > +			migrate->pfns[i] |= HMM_PFN_LOCKED;
>> > +		}
>>
>> What does taking the page_lock protect against? Can we document that?
>
> This is the usual page migration process, like the existing code: the
> page lock protects against anyone trying to map the page inside another
> process or at a different address. It also blocks a few fs operations.
> I don't think there is a comprehensive list anywhere, but I can try to
> make one.

I was comparing it against the trylock_page() usage above. But I guess
documenting the page lock can be another patch.

>
>> > +
>> > +		/* ZONE_DEVICE page are not on LRU */
>> > +		if (is_zone_device_page(page))
>> > +			goto check;
>> > +
>> > +		if (!PageLRU(page) && allow_drain) {
>> > +			/* Drain CPU's pagevec so page can be isolated */
>> > +			lru_add_drain_all();
>> > +			allow_drain = false;
>> > +			goto again;
>> > +		}
>> > +
>> > +		if (isolate_lru_page(page)) {
>> > +			migrate->pfns[i] &= ~HMM_PFN_MIGRATE;
>> > +			migrate->npages--;
>> > +			put_page(page);
>> > +			restore++;
>> > +		} else
>> > +			/* Drop the reference we took in collect */
>> > +			put_page(page);
>> > +
>> > +check:
>> > +		if (!hmm_migrate_page_check(page, 1)) {
>> > +			migrate->pfns[i] &= ~HMM_PFN_MIGRATE;
>> > +			migrate->npages--;
>> > +			restore++;
>> > +		}
>> > +	}
>> > +
>
.....

>> > +	}
>> > +	pte_unmap_unlock(ptep - 1, ptl);
>> > +
>> > +	addr = restart;
>> > +	i = (addr - migrate->start) >> PAGE_SHIFT;
>> > +	for (; addr < next && restore; addr += PAGE_SHIFT, i++) {
>> > +		page = hmm_pfn_to_page(migrate->pfns[i]);
>> > +		if (!page || (migrate->pfns[i] & HMM_PFN_MIGRATE))
>> > +			continue;
>> > +
>> > +		migrate->pfns[i] = 0;
>> > +		unlock_page(page);
>> > +		restore--;
>> > +
>> > +		if (is_zone_device_page(page)) {
>> > +			put_page(page);
>> > +			continue;
>> > +		}
>> > +
>> > +		putback_lru_page(page);
>> > +	}
>> > +
>> > +		if (!restore)
>> > +			break;
>> > +	}
>>
>>
>> All the above restore won't be needed if we didn't do that migration
>> entry setup in the first function, right? We would just need to drop
>> the refcount for pages that we failed to isolate? No need to walk the
>> page table, etc.?
>
> Well, the migration entry setup is important so that no concurrent
> migrations can race with each other; the one that sets the migration
> entry first is the one that wins with respect to migration. Also, the
> CPU page table entry needs to be cleared so that the page content is
> stable and the DMA copy does not miss any data left over in some cache.
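For reference, my understanding of the generic migration-entry mechanism
being described here, as a sketch rather than code from this patch:

	/* Migrating side: replace the pte with a migration entry. */
	swp_entry_t entry = make_migration_entry(page, write);
	pte_t swp_pte = swp_entry_to_pte(entry);

	set_pte_at(mm, addr, ptep, swp_pte);	/* CPU access now faults */

	/*
	 * Faulting side, as in do_swap_page(): any thread hitting the
	 * entry sleeps until the migration is finished, so whoever
	 * installs the entry first wins the race.
	 */
	entry = pte_to_swp_entry(pte);
	if (is_migration_entry(entry))
		migration_entry_wait(mm, pmd, address);

So the entry both blocks concurrent faults on this mapping and clears the
pte so the copy sees stable content.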
This is the part I am still trying to understand. hmm_collect_walk_pmd()
did the migration entry setup only in one process's page table. So how
can it prevent concurrent migration, when one could initiate a migration
using the va/mapping of another process? Isn't it the page lock that
prevents concurrent migration?

........

>
>> Why are we walking the page table multiple times? Is it that after
>> alloc_copy the content of the migrate->pfns array is now the new pfns?
>> It is confusing that each of these functions walks one page table
>> multiple times (even when the page can be shared). I was expecting us
>> to walk the page table once to collect the pfns/pages and then use
>> that in the rest of the calls. Any specific reason you chose to
>> implement it this way?
>
> Well, you need to know the source and destination pages, so either I
> have 2 arrays, one for source pages and one for destination pages, and
> then I do not need to walk the page table multiple times. But needing 2
> arrays might be problematic, as here we want to migrate a reasonable
> chunk, ie a few megabytes, hence there is a need for vmalloc.
>
> My advice to device drivers was to pre-allocate this array once (maybe
> preallocate a couple of them). If you really prefer avoiding walking
> the CPU page table over and over then I can switch to the 2-array
> solution.
>

Having two arrays makes the code easier to follow. But otherwise I guess
documenting the above usage of the page table above each function will
also help.

.....

>> IMHO if we can get each of the above functions documented properly it
>> will help with code review. Also, if we can avoid the multiple page
>> table walks, it will make this closer to the existing migration logic.
>>
>
> What kind of documentation are you looking for? I thought the high
> level overview was enough, as none of the functions do anything out of
> the ordinary. Do you want more inline documentation? Or a more verbose
> high-level overview?

Inline documentation for the functions will be useful. Also, if you can
split the hmm_collect_walk_pmd() optimization we discussed above into a
separate patch, I guess this will be a lot easier to follow. I still
haven't understood why we set up that migration entry early, and that
too only in one process's page table. If we can explain that as a
separate patch, maybe it will be much easier to follow.

-aneesh
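P.S. For concreteness, the two-array alternative might look something
like the sketch below. The struct and function names are hypothetical,
just to illustrate the shape: the driver preallocates the batch once, so
there is no vmalloc() on the migration path itself.

	struct hmm_migrate_batch {
		struct vm_area_struct *vma;
		unsigned long start, end, npages;
		unsigned long *src_pfns;	/* filled by the collect walk */
		unsigned long *dst_pfns;	/* filled by alloc_copy() */
	};

	/* Called once at driver init; batch is then reused per migration. */
	static int hmm_migrate_batch_init(struct hmm_migrate_batch *batch,
					  unsigned long max_pages)
	{
		batch->src_pfns = vmalloc(max_pages * sizeof(*batch->src_pfns));
		batch->dst_pfns = vmalloc(max_pages * sizeof(*batch->dst_pfns));
		if (!batch->src_pfns || !batch->dst_pfns) {
			vfree(batch->src_pfns);	/* vfree(NULL) is a no-op */
			vfree(batch->dst_pfns);
			return -ENOMEM;
		}
		return 0;
	}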