From: Jue Wang
Date: Wed, 22 Sep 2021 22:43:40 -0700
Subject: Re: [PATCH 1/3] userfaultfd/selftests: fix feature support detection
To: James Houghton
Cc: Peter Xu, Axel Rasmussen, Andrew Morton, Shuah Khan, Linux MM, Linuxkselftest, LKML

On Wed, Sep 22, 2021 at 9:18 PM James Houghton wrote:
>
> On Wed, Sep 22, 2021 at 4:49 PM Peter Xu wrote:
> >
> > On Wed, Sep 22, 2021 at 03:29:42PM -0700, Axel Rasmussen wrote:
> > > On Wed, Sep 22, 2021 at 2:52 PM Peter Xu wrote:
> > > >
> > > > On Wed, Sep 22, 2021 at 01:54:53PM -0700, Axel Rasmussen wrote:
> > > > > On Wed, Sep 22, 2021 at 10:33 AM Peter Xu wrote:
> > > > > >
> > > > > > Hello, Axel,
> > > > > >
> > > > > > On Wed, Sep 22, 2021 at 10:04:03AM -0700, Axel Rasmussen wrote:
> > > > > > > Thanks for discussing the design, Peter. I have some ideas which might
> > > > > > > make for a nicer v2; I'll massage the code a bit and see what I can
> > > > > > > come up with.
> > > > > >
> > > > > > Sure thing. Note again that I don't have a strong opinion on it, so feel
> > > > > > free to keep it. However, if you provide a v2, I'll read it.
> > > > > >
> > > > > > [off-topic below]
> > > > > >
> > > > > > Another thing I have probably forgotten but need your confirmation on:
> > > > > > when you worked on uffd minor mode, did you explicitly disable THP, or
> > > > > > is it allowed?
> > > > >
> > > > > I gave a more detailed answer in the other thread, but: currently it
> > > > > is allowed, but this was a bug / oversight on my part. :) THP collapse
> > > > > can break the guarantees minor fault registration is trying to
> > > > > provide.
> > > >
> > > > I've replied there:
> > > >
> > > > https://lore.kernel.org/linux-mm/YUueOUfoamxOvEyO@t490s/
> > > >
> > > > We can try to keep the discussion on this unified there.
> > > >
> > > > > But there's another scenario: what if the collapse happened well
> > > > > before registration happened?
> > > >
> > > > Maybe yes, but my understanding of the current uffd-minor scenario tells me
> > > > that this is fine too. Meanwhile, I actually have another idea regarding
> > > > minor mode; please continue reading.
> > > >
> > > > Firstly, let me try to recap how minor mode is used in your production
> > > > systems: I believe there are two processes, A and B, where A is the main
> > > > process and B is the migration process. B migrates pages in the background,
> > > > while A has so far been stopped and never run. When we want to start A, we
> > > > register A with uffd-minor over the whole range (note: I think so far A
> > > > does not have any pgtable mapped within the uffd-minor range). Then any
> > > > page access by A should kick B and ask "is this the latest page?"; if yes,
> > > > then UFFDIO_CONTINUE; if not, B modifies the page, plus UFFDIO_CONTINUE
> > > > afterwards. Am I right above?
> > > >
> > > > So if that's the case, then A should have no page table at all.
> > > >
> > > > Then, is it a problem if the shmem file that A maps contains huge THPs? I
> > > > think not - because UFFDIO_CONTINUE will only install small pages.
> > > >
> > > > Let me know if I'm understanding it right; I'll be happy to be corrected.
> > >
> > > Right, except that our use case is even more similar to QEMU: the code
> > > doing UFFDIO_CONTINUE / demand paging, and the code running the vCPUs,
> > > are in the same process (same mm) - just different threads.
> >
> > I see.
> >
> > > >
> > > > Actually, besides this scenario, I'm also thinking of another scenario of
> > > > using minor faults in a single process - that's mostly what QEMU is doing
> > > > right now, as QEMU has the vCPU threads and migration thread sharing a
> > > > single mm/pgtable. So I think it'll be great to have a new madvise(MADV_ZAP)
> > > > which will tear down all the file-backed memory pgtables of a specific
> > > > range. I think it'll suit the minor fault use case perfectly, and it can be
> > > > used for other things too. Let me know what you think about this idea, and
> > > > whether that'll help in your case too (e.g., if you worry that a current
> > > > process A has a huge shmem THP mapped somewhere, we can use
> > > > madvise(MADV_ZAP) to drop it).
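For reference, here is a minimal sketch of the minor-fault flow Peter describes
above: register the range in minor mode, then have the resolving thread update
the backing page and install the PTE with UFFDIO_CONTINUE. It assumes a kernel
with UFFD_FEATURE_MINOR_SHMEM and an already-created uffd; the names, setup and
error handling are illustrative only, not code from this series:

#include <stddef.h>
#include <linux/userfaultfd.h>
#include <sys/ioctl.h>

/* Register a shmem-backed range for minor faults only: a fault is raised
 * when the page cache page exists but no PTE is installed yet. */
static int register_minor(int uffd, void *base, size_t len)
{
        struct uffdio_register reg = {
                .range = { .start = (unsigned long)base, .len = len },
                .mode  = UFFDIO_REGISTER_MODE_MINOR,
        };

        return ioctl(uffd, UFFDIO_REGISTER, &reg);
}

/* Resolver side ("B" above): after making sure the page contents are up to
 * date, install the PTE for the faulting thread without copying any data. */
static int resolve_minor_fault(int uffd, unsigned long addr, size_t page_size)
{
        struct uffdio_continue cont = {
                .range = { .start = addr & ~(page_size - 1), .len = page_size },
                .mode  = 0,
        };

        return ioctl(uffd, UFFDIO_CONTINUE, &cont);
}

The faulting side just touches memory; the resulting event shows up on the uffd
with UFFD_PAGEFAULT_FLAG_MINOR set and is resolved by the second helper.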
> > >
> > > Yes, this would be convenient for our implementation too. :) There are
> > > workarounds if the feature doesn't exist, but it would be nice to
> > > have.
> >
> > Could I know what's the workaround? Normally, if the workaround works solidly,
> > then there's less need to introduce a kernel interface for it. Otherwise I'm
> > glad to look into such a formal proposal.
>
> The workaround is, for the region that you want to zap, to run through
> this sequence of syscalls: munmap, mmap, and re-register with
> userfaultfd if it was registered before. If we're using tmpfs, we can
> use madvise(MADV_DONTNEED) instead, but this is kind of an abuse of the
> API. I don't think there's a guarantee that the PTEs will get zapped,
> but currently they always do get zapped if we're using tmpfs. I
> really like the idea of adding a new madvise() mode that is guaranteed
> to zap the PTEs.
>
> >
> > > It's also useful for memory poisoning, I think, if the host
> > > decides some page(s) are "bad" and wants to intercept any future guest
> > > accesses to those page(s).
> >
> > Curious: doesn't hwpoison information come from MCEs, i.e., the host kernel
> > side? Then I'd have thought the host kernel already has full control of it.
> >
> > Or is there another way for the host to detect that some pages are going bad,
> > so that userspace can do something before the kernel handles those exceptions?
>
> Here's a general idea of how we would like to use userfaultfd to support MPR:
>
> If a guest accesses a poisoned page for the first time, we will get an
> MCE through the host kernel and send an MCE to the guest. The guest
> will now no longer be able to access this page, and we have to enforce
> this. After a live migration, the pages that were poisoned before
> probably won't still be poisoned (from the host's perspective), so we
> can't rely on the host kernel's MCE handling path. This is where
> userfaultfd and this new madvise mode come in: we can just
> madvise(MADV_ZAP) the poisoned page(s) on the target during a
> migration. Now all accesses will be routed to the VMM and we can
> inject an MCE. We don't *need* the new madvise mode, as we can also
> use fallocate(PUNCH_HOLE) (which works for tmpfs and hugetlbfs), but it
> would be more convenient if we didn't have to use fallocate.
>
> Jue Wang can provide more context here, so I've cc'd him. There may be
> some things I'm wrong about, so Jue, feel free to correct me.
>

James is right. The page is marked PG_HWPoison in the source VM host's kernel.
The need to intercept guest accesses to it exists on the target VM host, where
the same physical page is no longer poisoned.

On the target host, the hypervisor needs to intercept all guest accesses to
pages that were poisoned on the source VM host.

>
> - James
>
> >
> > --
> > Peter Xu
> >
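For completeness, a rough sketch of the existing fallback James mentions for
dropping a single poisoned page without the proposed MADV_ZAP: punch a hole in
the backing file, or fall back to MADV_DONTNEED on a tmpfs mapping to zap the
PTEs. The memfd/offset/addr parameters are placeholders, not part of any real
VMM code:

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/mman.h>

/* "memfd" is the tmpfs/hugetlbfs fd backing guest memory, "offset" the file
 * offset of the poisoned page, "addr" its mapping, "pgsz" the backing page
 * size. */
static int drop_poisoned_page(int memfd, off_t offset, void *addr, size_t pgsz)
{
        /* Discard the page in the backing file; the next guest access faults
         * and can be intercepted through the registered userfaultfd. */
        if (!fallocate(memfd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                       offset, pgsz))
                return 0;

        /* tmpfs-only fallback: zap the PTEs for this range. The API does not
         * guarantee this for shared mappings, which is what MADV_ZAP would
         * address. */
        return madvise(addr, pgsz, MADV_DONTNEED);
}

A madvise(MADV_ZAP) as proposed above would replace both branches with a single
call guaranteed to tear down the page tables for any file-backed range.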