Subject: Re: [PATCH 1/2] KVM: MMU: Do not treat ZONE_DEVICE pages as being reserved
From: Paolo Bonzini
To: Dan Williams, Sean Christopherson
Cc: Radim Krčmář, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
    KVM list, Linux Kernel Mailing List, Adam Borowski, David Hildenbrand
Date: Wed, 6 Nov 2019 22:09:29 +0100
Message-ID: <1cf71906-ba99-e637-650f-fc08ac4f3d5f@redhat.com>
References: <20191106170727.14457-1-sean.j.christopherson@intel.com>
 <20191106170727.14457-2-sean.j.christopherson@intel.com>

On 06/11/19 19:04, Dan Williams wrote:
> On Wed, Nov 6, 2019 at 9:07 AM Sean Christopherson wrote:
>>
>> Explicitly exempt ZONE_DEVICE pages from kvm_is_reserved_pfn() and
>> instead manually handle ZONE_DEVICE on a case-by-case basis.  For things
>> like page refcounts, KVM needs to treat ZONE_DEVICE pages like normal
>> pages, e.g. put pages grabbed via gup().  But KVM needs special handling
>> in other flows where ZONE_DEVICE pages lack the underlying machinery,
>> e.g. when setting accessed/dirty bits and shifting refcounts for
>> transparent huge pages.
>>
>> This fixes a hang reported by Adam Borowski[*] in dev_pagemap_cleanup()
>> when running a KVM guest backed with /dev/dax memory, as KVM straight up
>> doesn't put any references to ZONE_DEVICE pages acquired by gup().
>>
>> [*] http://lkml.kernel.org/r/20190919115547.GA17963@angband.pl
>>
>> Reported-by: Adam Borowski
>> Debugged-by: David Hildenbrand
>> Cc: Dan Williams
>> Cc: stable@vger.kernel.org
>> Signed-off-by: Sean Christopherson
>> ---
>>  arch/x86/kvm/mmu.c       |  8 ++++----
>>  include/linux/kvm_host.h |  1 +
>>  virt/kvm/kvm_main.c      | 19 +++++++++++++++----
>>  3 files changed, 20 insertions(+), 8 deletions(-)
>>
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index 24c23c66b226..bf82b1f2e834 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -3306,7 +3306,7 @@ static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
>>           * here.
>>           */
>>          if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn) &&
>> -            level == PT_PAGE_TABLE_LEVEL &&
>> +            !kvm_is_zone_device_pfn(pfn) && level == PT_PAGE_TABLE_LEVEL &&
>>              PageTransCompoundMap(pfn_to_page(pfn)) &&
>>              !mmu_gfn_lpage_is_disallowed(vcpu, gfn, PT_DIRECTORY_LEVEL)) {
>>                  unsigned long mask;
>> @@ -5914,9 +5914,9 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
>>           * the guest, and the guest page table is using 4K page size
>>           * mapping if the indirect sp has level = 1.
>>           */
>> -        if (sp->role.direct &&
>> -            !kvm_is_reserved_pfn(pfn) &&
>> -            PageTransCompoundMap(pfn_to_page(pfn))) {
>> +        if (sp->role.direct && !kvm_is_reserved_pfn(pfn) &&
>> +            !kvm_is_zone_device_pfn(pfn) &&
>> +            PageTransCompoundMap(pfn_to_page(pfn))) {
>>                  pte_list_remove(rmap_head, sptep);
>>
>>                  if (kvm_available_flush_tlb_with_range())
>> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
>> index a817e446c9aa..4ad1cd7d2d4d 100644
>> --- a/include/linux/kvm_host.h
>> +++ b/include/linux/kvm_host.h
>> @@ -966,6 +966,7 @@ int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu);
>>  void kvm_vcpu_kick(struct kvm_vcpu *vcpu);
>>
>>  bool kvm_is_reserved_pfn(kvm_pfn_t pfn);
>> +bool kvm_is_zone_device_pfn(kvm_pfn_t pfn);
>>
>>  struct kvm_irq_ack_notifier {
>>          struct hlist_node link;
>> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
>> index b8534c6b8cf6..0a781b1fb8f0 100644
>> --- a/virt/kvm/kvm_main.c
>> +++ b/virt/kvm/kvm_main.c
>> @@ -151,12 +151,23 @@ __weak int kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
>>
>>  bool kvm_is_reserved_pfn(kvm_pfn_t pfn)
>>  {
>> +        /*
>> +         * ZONE_DEVICE pages currently set PG_reserved, but from a refcounting
>> +         * perspective they are "normal" pages, albeit with slightly different
>> +         * usage rules.
>> +         */
>>          if (pfn_valid(pfn))
>> -                return PageReserved(pfn_to_page(pfn));
>> +                return PageReserved(pfn_to_page(pfn)) &&
>> +                       !is_zone_device_page(pfn_to_page(pfn));
>
> This is racy unless you can be certain that the pfn and resulting page
> has already been pinned by get_user_pages().

What is the race exactly?  In general, KVM does not use pfns until after
getting them from get_user_pages() (or follow_pfn() for VM_IO | VM_PFNMAP
VMAs, for which get_user_pages() fails, but that is not at issue here).
It then creates the page tables and releases the reference to the struct
page; a rough sketch of this flow is at the end of this mail.  Anything
else happens _after_ the reference has been released, but still from
within an MMU notifier; this is why KVM uses pfn_to_page() quite
pervasively.

If this is enough to avoid races, then I prefer Sean's patch.  If it is
racy, we need to fix kvm_set_pfn_accessed() and kvm_set_pfn_dirty() first,
and only then look at transparent_hugepage_adjust() and
kvm_mmu_zap_collapsible_spte():

- If accessed/dirty state need not be tracked properly for ZONE_DEVICE
  pages, then I suppose David's patch is okay (though I'd like a big
  comment explaining everything that went on in these emails).  If they
  do need to work, however, Sean's patch 1 is the right thing to do.

- If we need Sean's patch 1, but it is racy to use is_zone_device_page(),
  we could first introduce a helper similar to kvm_is_hugepage_allowed()
  from his patch 2, but using pfn_to_online_page() to filter out
  ZONE_DEVICE pages (sketched below).  This would cover both
  transparent_hugepage_adjust() and kvm_mmu_zap_collapsible_spte().

> This is why I told David to steer clear of adding more
> is_zone_device_page() usage; it's difficult to audit.  Without an
> existing pin, the metadata that determines whether a page is ZONE_DEVICE
> or not could be in the process of being torn down.  Ideally KVM would
> pass around a struct { struct page *page, unsigned long pfn } tuple so
> it would not have to do this "recall context" dance on every pfn
> operation.

Unfortunately, once KVM has created its own page tables the struct page
reference is lost, as the pfn is the only thing stored in them.

Paolo
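
For concreteness, the pin/map/unpin flow described above has roughly the
following shape.  This is a simplified sketch, not the actual KVM code
(the real logic lives in hva_to_pfn() and its callers); the function name,
flags, and error handling here are illustrative only:

#include <linux/mm.h>        /* get_user_pages(), put_page() */
#include <linux/kvm_host.h>  /* kvm_pfn_t, KVM_PFN_ERR_FAULT */

/* hva is the host virtual address backing the guest frame. */
static kvm_pfn_t sketch_hva_to_pfn(unsigned long hva)
{
        struct page *page;
        kvm_pfn_t pfn;

        /* Pin the backing page; this is the only point KVM holds a pin. */
        if (get_user_pages(hva, 1, FOLL_WRITE, &page, NULL) != 1)
                return KVM_PFN_ERR_FAULT;
        pfn = page_to_pfn(page);

        /* ... the caller installs pfn into the shadow/EPT page tables ... */

        /*
         * Drop the gup() reference.  From here on KVM holds only the raw
         * pfn; later accessed/dirty updates, e.g. from MMU notifiers, must
         * go back through pfn_to_page().
         */
        put_page(page);
        return pfn;
}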
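
And a minimal sketch of the pfn_to_online_page()-based helper suggested in
the second bullet above.  The name kvm_is_transparent_hugepage() and the
exact shape are illustrative, not taken from either patch;
pfn_to_online_page() comes from <linux/memory_hotplug.h>:

static bool kvm_is_transparent_hugepage(kvm_pfn_t pfn)
{
        struct page *page = pfn_to_online_page(pfn);

        /*
         * ZONE_DEVICE pages are never onlined, so pfn_to_online_page()
         * returns NULL for them and their (possibly disappearing)
         * metadata is never dereferenced.
         */
        if (!page)
                return false;

        return PageTransCompoundMap(page);
}

transparent_hugepage_adjust() and kvm_mmu_zap_collapsible_spte() could
then call this helper instead of open-coding the separate
!kvm_is_zone_device_pfn(pfn) and PageTransCompoundMap(pfn_to_page(pfn))
checks.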