Subject: [MODERATED] Re: [PATCH v5 8/8] NX 8
From: Paolo Bonzini
To: speck@linutronix.de
Date: Thu, 10 Oct 2019 19:37:26 +0200

On 10/10/19 18:50, Paolo Bonzini wrote:
> On 10/10/19 18:14, Paolo Bonzini wrote:
>> On 10/10/19 07:53, speck for Pawan Gupta wrote:
>>> We can help debug the crash. Can you please share the series,
>>> reproduction steps, and the crash signature?
>
> The bug is a race condition between kvm_mmu_zap_all and, well,
> everything else. It is triggered when nx_huge_pages is cleared or set
> while the recovery thread runs.

Gah, figured it out. It's specific to v6. I'll post the fixed patches
tomorrow; dinnertime now.

Paolo

> Paolo
>
>> The reproduction steps for v5 are as follows:
>>
>> - grab the next branch of kvm-unit-tests.git [1] and build it
>>
>> - create a lot of hugepages; on my machine I use 40 GiB worth of them:
>>
>>       echo 20480 > /proc/sys/vm/nr_hugepages
>>
>> - load KVM with kvm.nx_huge_pages_recovery_period_secs=3
>>
>> - run the following script:
>>
>>       while true; do
>>           echo N > /sys/module/kvm/parameters/nx_huge_pages; sleep 1
>>           echo Y > /sys/module/kvm/parameters/nx_huge_pages; sleep 5
>>       done
>>
>> - run the testcase with:
>>
>>       MEM=40960   # in megabytes
>>       qemu-kvm -nodefaults -vnc none -serial stdio \
>>           -kernel x86/hugetext.flat -m $MEM -mem-path /dev/hugepages
>>
>> You can also add a WARN_ON_ONCE(!sp->lpage_disallowed) to
>> kvm_recover_nx_lpages before the call to kvm_mmu_prepare_zap_page. As
>> soon as it triggers, of course, everything will go downhill.
>>
>> Paolo
>>
>> [1] git://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git
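
For concreteness, here is a minimal sketch of where that WARN_ON_ONCE
lands in the recovery loop. Only kvm_recover_nx_lpages,
kvm_mmu_prepare_zap_page and sp->lpage_disallowed are taken from the
message above; the list name, the nx_lpages_to_zap() helper and the
locking are assumptions made for illustration, not the actual patch:

    static void kvm_recover_nx_lpages(struct kvm *kvm)
    {
            struct kvm_mmu_page *sp;
            LIST_HEAD(invalid_list);
            /*
             * nx_lpages_to_zap() is a made-up placeholder for however
             * the series decides how many pages to reclaim per pass.
             */
            unsigned long to_zap = nx_lpages_to_zap(kvm);

            spin_lock(&kvm->mmu_lock);

            while (to_zap-- &&
                   !list_empty(&kvm->arch.lpage_disallowed_mmu_pages)) {
                    sp = list_first_entry(&kvm->arch.lpage_disallowed_mmu_pages,
                                          struct kvm_mmu_page,
                                          lpage_disallowed_link);

                    /*
                     * Every page on this list was split because of the
                     * NX workaround, so the flag must still be set
                     * here; if it is not, something zapped the page
                     * under us and the list is already corrupt.
                     */
                    WARN_ON_ONCE(!sp->lpage_disallowed);

                    kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
            }

            kvm_mmu_commit_zap_page(kvm, &invalid_list);
            spin_unlock(&kvm->mmu_lock);
    }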
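
The other half of the race is the module parameter store that the
echo N/Y loop above exercises. Again a sketch rather than the real
code: the handler name, the kvm_lock/vm_list iteration and the
kstrtobool parsing are assumptions; the message only says that
clearing or setting nx_huge_pages ends up in kvm_mmu_zap_all:

    static int set_nx_huge_pages(const char *val,
                                 const struct kernel_param *kp)
    {
            struct kvm *kvm;
            bool old_val = nx_huge_pages;
            bool new_val;

            /* "N"/"Y" written to the sysfs file is parsed as a bool. */
            if (kstrtobool(val, &new_val))
                    return -EINVAL;

            nx_huge_pages = new_val;
            if (new_val == old_val)
                    return 0;

            /*
             * Zapping everything frees shadow pages that may still sit
             * on kvm->arch.lpage_disallowed_mmu_pages while the
             * recovery thread walks that same list -- the race
             * described above.
             */
            mutex_lock(&kvm_lock);
            list_for_each_entry(kvm, &vm_list, vm_list)
                    kvm_mmu_zap_all(kvm);
            mutex_unlock(&kvm_lock);

            return 0;
    }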