* [PATCH v1 0/9] Memory scrubbing from idle loop
@ 2017-03-24 17:04 Boris Ostrovsky
From: Boris Ostrovsky @ 2017-03-24 17:04 UTC
  To: xen-devel
  Cc: sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
	ian.jackson, tim, jbeulich, Boris Ostrovsky

When a domain is destroyed, the hypervisor must scrub the domain's pages
before giving them to another guest in order to prevent leaking the deceased
guest's data. Currently this is done during the guest's destruction, which
can make the cleanup process very lengthy.

This series adds support for scrubbing released pages from the idle loop,
making guest destruction significantly faster. For example, destroying a
1TB guest can now be completed in 40+ seconds, as opposed to about 9 minutes
with the existing scrubbing algorithm.

The downside of this series is that we sometimes fail to allocate high-order
sets of pages, since dirty pages may not yet have been merged into
higher-order sets while they are waiting to be scrubbed.

Briefly, the new algorithm places dirty pages at the end of the heap's page
list for each node/zone/order, so that finding dirty pages does not require
scanning the full list. One processor from each node checks whether the node
has any dirty pages and, if such pages are found, scrubs them. Scrubbing
itself happens without holding the heap lock, so other users may access the
heap in the meantime. If a chunk of pages that the idle loop is currently
scrubbing is requested by the heap allocator, scrubbing of that chunk is
immediately stopped.
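
To make this concrete, here is a rough sketch of the idle-loop side. This is
illustration only, not the patch itself: all helpers here other than
scrub_one_page() are invented names, not the series' actual interfaces.

static void idle_scrub_node(unsigned int node)
{
    struct page_info *pg;

    /* Cheap check: nothing to do if the node has no dirty pages. */
    if ( !node_has_dirty_pages(node) )
        return;

    /*
     * Dirty buddies sit at the tail of each page list, so finding one
     * does not require scanning the whole list.
     */
    while ( (pg = pick_dirty_buddy(node)) != NULL )
    {
        unsigned int i;

        /*
         * Scrub without holding the heap lock so that allocations can
         * proceed concurrently.
         */
        for ( i = 0; i < (1U << buddy_order(pg)); i++ )
        {
            /* Stop immediately if the allocator claimed this buddy. */
            if ( buddy_was_claimed(pg) )
                return;

            scrub_one_page(&pg[i]);
        }

        /* The whole buddy is clean now; clear PGC_need_scrub on its head. */
        mark_buddy_clean(pg);
    }
}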

On the allocation side, alloc_heap_pages() first tries to satisfy the
allocation request using only clean pages. If this is not possible, the
search is repeated and any dirty pages found are scrubbed by the allocator
itself.
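
Sketched in the same spirit (find_buddy() and its use_unscrubbed parameter
are again invented for illustration):

/*
 * Two-pass search in alloc_heap_pages(): prefer clean buddies; only if
 * none is found accept a dirty one, which the allocator then scrubs
 * itself before returning it.
 */
pg = find_buddy(node, zone, order, /* use_unscrubbed: */ false);
if ( pg == NULL )
    pg = find_buddy(node, zone, order, /* use_unscrubbed: */ true);

if ( pg != NULL && (pg->count_info & PGC_need_scrub) )
{
    unsigned int i;

    for ( i = 0; i < (1U << order); i++ )
        scrub_one_page(&pg[i]);
    pg->count_info &= ~PGC_need_scrub;
}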

This series is somewhat based on earlier work by Bob Liu.

V1:
* Only set the PGC_need_scrub bit for the buddy head, making it unnecessary
  to scan the whole buddy
* Fix spin_lock_cb() (a sketch of this primitive follows the list)
* Scrub CPU-less nodes
* ARM support. Note that I have not been able to test this, only built the
  binary
* Added scrub test patch (last one). Not sure whether it should be considered
  for committing but I have been running with it.
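
As noted above, here is a minimal sketch of the spin_lock_cb() concept
(patch 6); the actual implementation in xen/common/spinlock.c is built on
the existing lock internals and differs in detail:

/*
 * spin_lock(), plus a caller-supplied callback invoked on each failed
 * acquisition attempt. The series uses this so that the scrubber and
 * the allocator can coordinate while waiting for the heap lock.
 */
void spin_lock_cb(spinlock_t *lock, void (*cb)(void *data), void *data)
{
    while ( !spin_trylock(lock) )
    {
        if ( cb )
            cb(data);
        cpu_relax();
    }
}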

Deferred:
* Per-node heap locks. In addition to (presumably) improving performance in
  general, once they are available we can parallelize scrubbing further by
  allowing more than one core per node to do idle loop scrubbing.
* AVX-based scrubbing
* Use idle loop scrubbing during boot.


Boris Ostrovsky (9):
  mm: Separate free page chunk merging into its own routine
  mm: Place unscrubbed pages at the end of pagelist
  mm: Scrub pages in alloc_heap_pages() if needed
  mm: Scrub memory from idle loop
  mm: Do not discard already-scrubbed pages if softirqs are pending
  spinlock: Introduce spin_lock_cb()
  mm: Keep pages available for allocation while scrubbing
  mm: Print number of unscrubbed pages in 'H' debug handler
  mm: Make sure pages are scrubbed

 xen/Kconfig.debug          |    7 +
 xen/arch/arm/domain.c      |   13 +-
 xen/arch/x86/domain.c      |    3 +-
 xen/common/page_alloc.c    |  450 +++++++++++++++++++++++++++++++++++++-------
 xen/common/spinlock.c      |   20 ++
 xen/include/asm-arm/mm.h   |    8 +
 xen/include/asm-x86/mm.h   |    8 +
 xen/include/xen/mm.h       |    1 +
 xen/include/xen/spinlock.h |    3 +
 9 files changed, 439 insertions(+), 74 deletions(-)




Thread overview: 33+ messages
2017-03-24 17:04 [PATCH v1 0/9] Memory scrubbing from idle loop Boris Ostrovsky
2017-03-24 17:04 ` [PATCH v1 1/9] mm: Separate free page chunk merging into its own routine Boris Ostrovsky
2017-03-27 15:16   ` Wei Liu
2017-03-27 16:03     ` Jan Beulich
2017-03-27 16:28       ` Boris Ostrovsky
2017-03-28  7:27         ` Jan Beulich
2017-03-28 19:20   ` Wei Liu
2017-03-28 19:41     ` Boris Ostrovsky
2017-03-28 19:44       ` Wei Liu
2017-03-28 19:57         ` Boris Ostrovsky
2017-03-24 17:04 ` [PATCH v1 2/9] mm: Place unscrubbed pages at the end of pagelist Boris Ostrovsky
2017-03-28 19:27   ` Wei Liu
2017-03-28 19:46     ` Boris Ostrovsky
2017-03-24 17:04 ` [PATCH v1 3/9] mm: Scrub pages in alloc_heap_pages() if needed Boris Ostrovsky
2017-03-28 19:43   ` Wei Liu
2017-03-24 17:04 ` [PATCH v1 4/9] mm: Scrub memory from idle loop Boris Ostrovsky
2017-03-28 20:01   ` Wei Liu
2017-03-28 20:14     ` Boris Ostrovsky
2017-03-24 17:05 ` [PATCH v1 5/9] mm: Do not discard already-scrubbed pages if softirqs are pending Boris Ostrovsky
2017-03-29 10:22   ` Wei Liu
2017-03-24 17:05 ` [PATCH v1 6/9] spinlock: Introduce spin_lock_cb() Boris Ostrovsky
2017-03-29 10:28   ` Wei Liu
2017-03-29 13:47     ` Boris Ostrovsky
2017-03-29 14:07       ` Boris Ostrovsky
2017-03-24 17:05 ` [PATCH v1 7/9] mm: Keep pages available for allocation while scrubbing Boris Ostrovsky
2017-03-24 17:05 ` [PATCH v1 8/9] mm: Print number of unscrubbed pages in 'H' debug handler Boris Ostrovsky
2017-03-28 20:11   ` Wei Liu
2017-03-24 17:05 ` [PATCH v1 9/9] mm: Make sure pages are scrubbed Boris Ostrovsky
2017-03-29 10:39   ` Wei Liu
2017-03-29 16:25   ` Wei Liu
2017-03-29 16:35     ` Boris Ostrovsky
2017-03-29 16:45       ` Wei Liu
2017-03-29 17:12         ` Andrew Cooper
