Date: Mon, 2 May 2016 20:56:49 +0200
From: Andrea Arcangeli
To: "Kirill A. Shutemov"
Cc: Jerome Glisse, Oleg Nesterov, Hugh Dickins, Linus Torvalds,
	Andrew Morton, Alex Williamson, kirill.shutemov@linux.intel.com,
	linux-kernel@vger.kernel.org, "linux-mm@kvack.org"
Subject: Re: GUP guarantees wrt to userspace mappings redesign
Message-ID: <20160502185649.GC12310@redhat.com>
In-Reply-To: <20160502121402.GB23305@node.shutemov.name>

On Mon, May 02, 2016 at 03:14:02PM +0300, Kirill A. Shutemov wrote:
> Quick look around:
>
> - I don't see any check page_count() around __replace_page() in uprobes,
>   so it can easily replace pinned page.
>
> - KSM has the page_count() check, there's still race wrt GUP_fast: it can
>   take the pin between the check and establishing new pte entry.
The comment in KSM's replace_page() (mm/ksm.c) explains exactly this:

	/*
	 * Ok this is tricky, when get_user_pages_fast() run it doesn't
	 * take any lock, therefore the check that we are going to make
	 * with the pagecount against the mapcount is racey and
	 * O_DIRECT can happen right after the check.
	 * So we clear the pte and flush the tlb before the check
	 * this assure us that no O_DIRECT can happen after the check
	 * or in the middle of the check.
	 */
	entry = ptep_clear_flush_notify(vma, addr, ptep);

KSM takes care of that, or it wouldn't be safe to run KSM on memory
under O_DIRECT.

> - khugepaged: the same story as with KSM.

In __collapse_huge_page_isolate we do:

	/*
	 * cannot use mapcount: can't collapse if there's a gup pin.
	 * The page must only be referenced by the scanned process
	 * and page swap cache.
	 */
	if (page_count(page) != 1 + !!PageSwapCache(page)) {
		unlock_page(page);
		result = SCAN_PAGE_COUNT;
		goto out;
	}

At that point the pmd has been zapped (pmdp_collapse_flush has already
run) and, as in the KSM case, that is enough to ensure
get_user_pages_fast can't succeed and has to fall back to the slow
get_user_pages.

These two issues are not specific to vfio and IOMMUs: this must be
correct, or O_DIRECT would generate data corruption in the presence of
KSM/khugepaged. Both look fine to me.

> I don't see how we can deliver on the guarantee, especially with lockless
> GUP_fast.

By zapping the pmd_trans_huge/pte and sending IPIs if needed
(get_user_pages_fast runs with irqs disabled), before checking
page_count.

With the RCU version it's the same, but instead of sending IPIs we
wait for a quiescent point to be sure any concurrent
get_user_pages_fast has been flushed out of the other CPUs, before we
proceed to check page_count (after which no other get_user_pages_fast
can increase the page count for this page on this "mm"). That's how
the guarantee is provided against get_user_pages_fast.