* [PATCH] KVM: Fix an issue in non-preemptible kernel.
@ 2019-09-02  9:02 Gary Fu
  2019-09-04 14:02 ` Paul Burton
  0 siblings, 1 reply; 4+ messages in thread
From: Gary Fu @ 2019-09-02  9:02 UTC (permalink / raw)
  To: linux-mips; +Cc: Paul Burton, Archer Yan, Gary Fu

Add a cond_resched() to give the scheduler a chance to run the madvise
task, avoiding an endless retry loop here on a non-preemptible kernel.

Otherwise, kvm->mmu_notifier_count would never be decreased back to 0
by the madvise task via syscall -> zap_page_range ->
mmu_notifier_invalidate_range_end ->
__mmu_notifier_invalidate_range_end -> invalidate_range_end ->
kvm_mmu_notifier_invalidate_range_end, because the madvise task gets
scheduled out while running unmap_single_vma -> unmap_page_range ->
zap_p4d_range -> zap_pud_range -> zap_pmd_range -> cond_resched(),
which is reached before mmu_notifier_invalidate_range_end in
zap_page_range.
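
For context, the retry loop in kvm_mips_map_page() that this patch
touches looks roughly like the following. This is a simplified sketch
of the relevant part of arch/mips/kvm/mmu.c, not the verbatim source:

  retry:
          /* Sample the notifier sequence before looking up the pfn. */
          mmu_seq = kvm->mmu_notifier_seq;
          smp_rmb();

          pfn = gfn_to_pfn(kvm, gfn);
          if (is_error_noslot_pfn(pfn))
                  return -EFAULT;

          spin_lock(&kvm->mmu_lock);
          if (mmu_notifier_retry(kvm, mmu_seq)) {
                  /* An invalidation is in flight: drop the pfn and retry. */
                  spin_unlock(&kvm->mmu_lock);
                  kvm_release_pfn_clean(pfn);
                  /* This patch adds the scheduling point here. */
                  goto retry;
          }
          /* ... otherwise install the PTE for the new GPA mapping ... */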

Signed-off-by: Gary Fu <qfu@wavecomp.com>
---
 arch/mips/kvm/mmu.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index 97e538a8c1be..e52e63d225f4 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -746,6 +746,22 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
 		 */
 		spin_unlock(&kvm->mmu_lock);
 		kvm_release_pfn_clean(pfn);
+		/*
+		 * Add a cond_resched() to give the scheduler a chance to run
+		 * madvise task to avoid endless loop here in non-preemptible
+		 * kernel.
+		 * Otherwise, the kvm_mmu_notifier would have no chance to be
+		 * decreased to 0 by madvise task -> syscall -> zap_page_range
+		 * -> mmu_notifier_invalidate_range_end ->
+		 * __mmu_notifier_invalidate_range_end -> invalidate_range_end
+		 * -> kvm_mmu_notifier_invalidate_range_end, as the madvise task
+		 * would be scheduled when running unmap_single_vma ->
+		 * unmap_page_range -> zap_p4d_range -> zap_pud_range ->
+		 * zap_pmd_range -> cond_resched which is called before
+		 * mmu_notifier_invalidate_range_end in zap_page_range.
+		 */
+		if (need_resched())
+			cond_resched();
 		goto retry;
 	}
 
-- 
2.17.1


* [PATCH] KVM: Fix an issue in non-preemptible kernel.
@ 2019-09-09  2:49 Gary Fu
  0 siblings, 0 replies; 4+ messages in thread
From: Gary Fu @ 2019-09-09  2:49 UTC (permalink / raw)
  To: linux-mips; +Cc: Paul Burton, jhogan, Archer Yan, Gary Fu

Add a cond_resched() to give the scheduler a chance to run the madvise
task, avoiding an endless retry loop here on a non-preemptible kernel.

Otherwise, kvm->mmu_notifier_count would never be decreased back to 0
by the madvise task via syscall -> zap_page_range ->
mmu_notifier_invalidate_range_end ->
__mmu_notifier_invalidate_range_end -> invalidate_range_end ->
kvm_mmu_notifier_invalidate_range_end, because the madvise task gets
scheduled out while running unmap_single_vma -> unmap_page_range ->
zap_p4d_range -> zap_pud_range -> zap_pmd_range -> cond_resched(),
which is reached before mmu_notifier_invalidate_range_end in
zap_page_range.
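
The counter in question is kvm->mmu_notifier_count, which KVM's
generic MMU notifier hooks raise in
kvm_mmu_notifier_invalidate_range_start() and drop again in
kvm_mmu_notifier_invalidate_range_end(). The check the fault handler
spins on looks roughly like this (paraphrased from
include/linux/kvm_host.h of that era, comments omitted):

  static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)
  {
          if (unlikely(kvm->mmu_notifier_count))
                  return 1;
          if (kvm->mmu_notifier_seq != mmu_seq)
                  return 1;
          return 0;
  }

While mmu_notifier_count stays non-zero, every pass through the fault
handler retries; only the blocked madvise task can bring it back to 0.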

When kvm_mips_map_page handles a GPA fault by creating a new GPA
mapping, it retries until a usable page is available. In the low
memory case it is effectively waiting for memory freed by a madvise
syscall with MADV_DONTNEED (QEMU -> madvise(MADV_DONTNEED) -> syscall
-> madvise_vma -> madvise_dontneed_free ->
madvise_dontneed_single_vma -> zap_page_range). In zap_page_range,
after the TLB for the given address range has been cleared by
unmap_single_vma, __mmu_notifier_invalidate_range_end is called, which
finally calls kvm_mmu_notifier_invalidate_range_end to decrease
mmu_notifier_count back to 0. The retry loop in kvm_mips_map_page
checks mmu_notifier_count; once the value is 0, indicating the
invalidation has completed and the mapping can proceed, it leaves the
retry loop and sets up the PTE for the new GPA mapping.

During the TLB clearing mentioned above (unmap_single_vma in the
madvise syscall), cond_resched() is called once per PMD so that a
large zap (e.g. of a huge page range) does not monopolize the CPU.
When this happens on a non-preemptible kernel, the retry loop in
kvm_mips_map_page spins forever, because there is no scheduling point
that would let the madvise task run again and reach
__mmu_notifier_invalidate_range_end to decrease mmu_notifier_count;
the counter therefore stays at 1.

Adding a scheduling point before every retry in kvm_mips_map_page
gives the madvise syscall (issued by QEMU) a chance to be rescheduled,
finish zapping the pages in the given range and drop
mmu_notifier_count back to 0, letting kvm_mips_map_page leave the
loop.
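
For reference, the host-side operation that starts this chain can be
reproduced with a plain madvise() call. The snippet below is a
minimal, self-contained userspace illustration, not QEMU code; the
MMU-notifier side only comes into play when the zapped range actually
backs a KVM guest:

  #include <stddef.h>
  #include <string.h>
  #include <sys/mman.h>

  int main(void)
  {
          size_t len = 64UL << 20;        /* stand-in for a chunk of guest RAM */
          void *mem = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

          if (mem == MAP_FAILED)
                  return 1;
          memset(mem, 0, len);            /* fault the pages in */

          /*
           * madvise(MADV_DONTNEED) -> madvise_dontneed_free ->
           * zap_page_range. When this memory backs a KVM guest (as it
           * does under QEMU), that path invokes the MMU notifiers
           * registered on the mm, raising mmu_notifier_count on
           * invalidate_range_start and lowering it on
           * invalidate_range_end -- the counter the retry loop polls.
           */
          if (madvise(mem, len, MADV_DONTNEED))
                  return 1;

          munmap(mem, len);
          return 0;
  }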

Signed-off-by: Gary Fu <qfu@wavecomp.com>
---
 arch/mips/kvm/mmu.c | 48 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index 97e538a8c1be..26bac7e1ea85 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -746,6 +746,54 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
 		 */
 		spin_unlock(&kvm->mmu_lock);
 		kvm_release_pfn_clean(pfn);
+		/*
+		 * Add a cond_resched() to give the scheduler a chance to run
+		 * madvise task to avoid endless loop here in non-preemptible
+		 * kernel.
+		 * Otherwise, the kvm_mmu_notifier would have no chance to be
+		 * decreased to 0 by madvise task -> syscall -> zap_page_range
+		 * -> mmu_notifier_invalidate_range_end ->
+		 * __mmu_notifier_invalidate_range_end -> invalidate_range_end
+		 * -> kvm_mmu_notifier_invalidate_range_end, as the madvise task
+		 * would be scheduled when running unmap_single_vma ->
+		 * unmap_page_range -> zap_p4d_range -> zap_pud_range ->
+		 * zap_pmd_range -> cond_resched which is called before
+		 * mmu_notifier_invalidate_range_end in zap_page_range.
+		 *
+		 * When handling GPA faults by creating a new GPA mapping in
+		 * kvm_mips_map_page, it will be retrying to get available
+		 * pages.
+		 * In the low memory case, it is waiting for the memory
+		 * resources freed by madvise syscall with MADV_DONTNEED (QEMU
+		 * application -> madvise with MADV_DONTNEED -> syscall ->
+		 * madvise_vma -> madvise_dontneed_free ->
+		 * madvise_dontneed_single_vma -> zap_page_range). In
+		 * zap_page_range, after the TLB of given address range is
+		 * cleared by unmap_single_vma, it will call
+		 *  __mmu_notifier_invalidate_range_end which finally calls
+		 * kvm_mmu_notifier_invalidate_range_end to decrease
+		 * mmu_notifier_count to 0. The retrying loop in
+		 * kvm_mips_map_page checks the mmu_notifier_count and if the
+		 * value is 0 which indicates that some new page is available
+		 * for mapping, it will jump out the retrying loop and set up
+		 * PTE for a new GPA mapping.
+		 * During the TLB clearing (in unmap_single_vma in madvise
+		 * syscall) mentioned above, it will call cond_resched() per
+		 * PMD for avoiding occupying CPU for a long time (in case of
+		 * huge page range zapping). When this happens in the
+		 * non-preemptible kernel, the retrying loop in
+		 * kvm_mips_map_page will be running endlessly as there is no
+		 * chance to reschedule back to madvise syscall to run
+		 * __mmu_notifier_invalidate_range_end to decrease
+		 * mmu_notifier_count so that the value of mmu_notifier_count
+		 * is always 1.
+		 * Adding a scheduling point before every retry in
+		 * kvm_mips_map_page will give the madvise syscall (invoked by
+		 * QEMU) a chance to be re-scheduled back to zap pages in the
+		 * given range and clear mmu_notifier_count value to let
+		 * kvm_mips_map_page task jump out the loop.
+		 */
+		cond_resched();
 		goto retry;
 	}
 
-- 
2.17.1


