From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, chuck.lever@oracle.com, hch@lst.de,
	jack@suse.cz, linux-mm@kvack.org, mhocko@suse.com,
	mm-commits@vger.kernel.org, neilb@suse.de,
	torvalds@linux-foundation.org, trond.myklebust@hammerspace.com
Subject: [patch 052/128] mm/writeback: replace PF_LESS_THROTTLE with PF_LOCAL_THROTTLE
Date: Mon, 01 Jun 2020 21:48:18 -0700	[thread overview]
Message-ID: <20200602044818.joMs-wr5s%akpm@linux-foundation.org> (raw)
In-Reply-To: <20200601214457.919c35648e96a2b46b573fe1@linux-foundation.org>

From: NeilBrown <neilb@suse.de>
Subject: mm/writeback: replace PF_LESS_THROTTLE with PF_LOCAL_THROTTLE

PF_LESS_THROTTLE exists for loop-back nfsd (and a similar need in the loop
block driver and callers of prctl(PR_SET_IO_FLUSHER)), where a daemon
needs to write to one bdi (the final bdi) in order to free up writes
queued to another bdi (the client bdi).
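
A userspace daemon in a similar position can ask for the same treatment via
prctl(PR_SET_IO_FLUSHER), which (per the kernel/sys.c hunk below) sets the
same flag.  A minimal illustrative sketch, not part of this patch; requires
CAP_SYS_RESOURCE, error handling trimmed:

        #include <stdio.h>
        #include <sys/prctl.h>

        #ifndef PR_SET_IO_FLUSHER
        #define PR_SET_IO_FLUSHER 57    /* from include/uapi/linux/prctl.h */
        #endif

        int main(void)
        {
                /*
                 * Mark this task as an I/O flusher: the kernel applies
                 * PF_MEMALLOC_NOIO plus PF_LESS_THROTTLE (PF_LOCAL_THROTTLE
                 * after this patch) to it.
                 */
                if (prctl(PR_SET_IO_FLUSHER, 1, 0, 0, 0) == -1)
                        perror("prctl(PR_SET_IO_FLUSHER)");

                /* ... the daemon would now service its backend writes ... */
                return 0;
        }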

The daemon sets PF_LESS_THROTTLE and gets a larger allowance of dirty
pages, so that it can still dirty pages after other processes have been
throttled.  The purpose of this is to avoid the deadlock that happens when
the PF_LESS_THROTTLE process must write in order for any dirty pages to be
freed, but it is being throttled and cannot write.

This approach was designed when all threads were blocked equally,
independently of which device they were writing to, or how fast it was.
Since that time the writeback algorithm has changed substantially with
different threads getting different allowances based on non-trivial
heuristics.  This means the simple "add 25%" heuristic is no longer
reliable.

The important issue is not that the daemon needs a *larger* dirty page
allowance, but that it needs a *private* dirty page allowance, so that
dirty pages for the "client" bdi that it is helping to clear (the bdi for
an NFS filesystem or loop block device etc) do not affect the throttling
of the daemon writing to the "final" bdi.

This patch changes the heuristic so that the task is not throttled when
the bdi it is writing to has a dirty page count below (or equal to)
the free-run threshold for that bdi.  This ensures it will always be able
to have some pages in flight, and so will not deadlock.
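
For reference, the per-bdi "free-run threshold" mentioned here is the
midpoint between the background and hard dirty thresholds, as computed by
dirty_freerun_ceiling() in mm/page-writeback.c (reproduced below for
context; this patch does not change it):

        static unsigned long dirty_freerun_ceiling(unsigned long thresh,
                                                   unsigned long bg_thresh)
        {
                return (thresh + bg_thresh) / 2;
        }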

In steady state, it is expected that PF_LOCAL_THROTTLE tasks might still
be throttled by the global threshold, but that is acceptable as it is only
the deadlock state that is interesting for this flag.

This approach of "only throttle when target bdi is busy" is consistent
with the other use of PF_LESS_THROTTLE in current_may_throttle(), where it
causes attention to be focused only on the target bdi.

So this patch
 - renames PF_LESS_THROTTLE to PF_LOCAL_THROTTLE,
 - removes the 25% bonus that that flag gives, and
 - if PF_LOCAL_THROTTLE is set, skips the delay entirely unless both the
   global and the local free-run thresholds are exceeded.

Note that previously realtime threads were treated the same as
PF_LESS_THROTTLE threads.  This patch does *not* change the behaviour for
real-time threads, so it is now different from the behaviour of nfsd and
loop tasks.  I don't know what is wanted for realtime.

[akpm@linux-foundation.org: coding style fixes]
Link: http://lkml.kernel.org/r/87ftbf7gs3.fsf@notabene.neil.brown.name
Signed-off-by: NeilBrown <neilb@suse.de>
Reviewed-by: Jan Kara <jack@suse.cz>
Acked-by: Chuck Lever <chuck.lever@oracle.com>	[nfsd]
Cc: Christoph Hellwig <hch@lst.de>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Trond Myklebust <trond.myklebust@hammerspace.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 drivers/block/loop.c  |    2 -
 fs/nfsd/vfs.c         |    9 ++++----
 include/linux/sched.h |    3 +-
 kernel/sys.c          |    2 -
 mm/page-writeback.c   |   41 ++++++++++++++++++++++++++++++++--------
 mm/vmscan.c           |    4 +--
 6 files changed, 44 insertions(+), 17 deletions(-)

--- a/drivers/block/loop.c~mm-replace-pf_less_throttle-with-pf_local_throttle
+++ a/drivers/block/loop.c
@@ -919,7 +919,7 @@ static void loop_unprepare_queue(struct
 
 static int loop_kthread_worker_fn(void *worker_ptr)
 {
-	current->flags |= PF_LESS_THROTTLE | PF_MEMALLOC_NOIO;
+	current->flags |= PF_LOCAL_THROTTLE | PF_MEMALLOC_NOIO;
 	return kthread_worker_fn(worker_ptr);
 }
 
--- a/fs/nfsd/vfs.c~mm-replace-pf_less_throttle-with-pf_local_throttle
+++ a/fs/nfsd/vfs.c
@@ -979,12 +979,13 @@ nfsd_vfs_write(struct svc_rqst *rqstp, s
 
 	if (test_bit(RQ_LOCAL, &rqstp->rq_flags))
 		/*
-		 * We want less throttling in balance_dirty_pages()
-		 * and shrink_inactive_list() so that nfs to
+		 * We want throttling in balance_dirty_pages()
+		 * and shrink_inactive_list() to only consider
+		 * the backingdev we are writing to, so that nfs to
 		 * localhost doesn't cause nfsd to lock up due to all
 		 * the client's dirty pages or its congested queue.
 		 */
-		current->flags |= PF_LESS_THROTTLE;
+		current->flags |= PF_LOCAL_THROTTLE;
 
 	exp = fhp->fh_export;
 	use_wgather = (rqstp->rq_vers == 2) && EX_WGATHER(exp);
@@ -1037,7 +1038,7 @@ out_nfserr:
 		nfserr = nfserrno(host_err);
 	}
 	if (test_bit(RQ_LOCAL, &rqstp->rq_flags))
-		current_restore_flags(pflags, PF_LESS_THROTTLE);
+		current_restore_flags(pflags, PF_LOCAL_THROTTLE);
 	return nfserr;
 }
 
--- a/include/linux/sched.h~mm-replace-pf_less_throttle-with-pf_local_throttle
+++ a/include/linux/sched.h
@@ -1481,7 +1481,8 @@ extern struct pid *cad_pid;
 #define PF_KSWAPD		0x00020000	/* I am kswapd */
 #define PF_MEMALLOC_NOFS	0x00040000	/* All allocation requests will inherit GFP_NOFS */
 #define PF_MEMALLOC_NOIO	0x00080000	/* All allocation requests will inherit GFP_NOIO */
-#define PF_LESS_THROTTLE	0x00100000	/* Throttle me less: I clean memory */
+#define PF_LOCAL_THROTTLE	0x00100000	/* Throttle writes only against the bdi I write to,
+						 * I am cleaning dirty pages from some other bdi. */
 #define PF_KTHREAD		0x00200000	/* I am a kernel thread */
 #define PF_RANDOMIZE		0x00400000	/* Randomize virtual address space */
 #define PF_SWAPWRITE		0x00800000	/* Allowed to write to swap */
--- a/kernel/sys.c~mm-replace-pf_less_throttle-with-pf_local_throttle
+++ a/kernel/sys.c
@@ -2262,7 +2262,7 @@ int __weak arch_prctl_spec_ctrl_set(stru
 	return -EINVAL;
 }
 
-#define PR_IO_FLUSHER (PF_MEMALLOC_NOIO | PF_LESS_THROTTLE)
+#define PR_IO_FLUSHER (PF_MEMALLOC_NOIO | PF_LOCAL_THROTTLE)
 
 SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
 		unsigned long, arg4, unsigned long, arg5)
--- a/mm/page-writeback.c~mm-replace-pf_less_throttle-with-pf_local_throttle
+++ a/mm/page-writeback.c
@@ -387,8 +387,7 @@ static unsigned long global_dirtyable_me
  * Calculate @dtc->thresh and ->bg_thresh considering
  * vm_dirty_{bytes|ratio} and dirty_background_{bytes|ratio}.  The caller
  * must ensure that @dtc->avail is set before calling this function.  The
- * dirty limits will be lifted by 1/4 for PF_LESS_THROTTLE (ie. nfsd) and
- * real-time tasks.
+ * dirty limits will be lifted by 1/4 for real-time tasks.
  */
 static void domain_dirty_limits(struct dirty_throttle_control *dtc)
 {
@@ -436,7 +435,7 @@ static void domain_dirty_limits(struct d
 	if (bg_thresh >= thresh)
 		bg_thresh = thresh / 2;
 	tsk = current;
-	if (tsk->flags & PF_LESS_THROTTLE || rt_task(tsk)) {
+	if (rt_task(tsk)) {
 		bg_thresh += bg_thresh / 4 + global_wb_domain.dirty_limit / 32;
 		thresh += thresh / 4 + global_wb_domain.dirty_limit / 32;
 	}
@@ -486,7 +485,7 @@ static unsigned long node_dirty_limit(st
 	else
 		dirty = vm_dirty_ratio * node_memory / 100;
 
-	if (tsk->flags & PF_LESS_THROTTLE || rt_task(tsk))
+	if (rt_task(tsk))
 		dirty += dirty / 4;
 
 	return dirty;
@@ -1653,8 +1652,12 @@ static void balance_dirty_pages(struct b
 		if (dirty <= dirty_freerun_ceiling(thresh, bg_thresh) &&
 		    (!mdtc ||
 		     m_dirty <= dirty_freerun_ceiling(m_thresh, m_bg_thresh))) {
-			unsigned long intv = dirty_poll_interval(dirty, thresh);
-			unsigned long m_intv = ULONG_MAX;
+			unsigned long intv;
+			unsigned long m_intv;
+
+free_running:
+			intv = dirty_poll_interval(dirty, thresh);
+			m_intv = ULONG_MAX;
 
 			current->dirty_paused_when = now;
 			current->nr_dirtied = 0;
@@ -1673,9 +1676,20 @@ static void balance_dirty_pages(struct b
 		 * Calculate global domain's pos_ratio and select the
 		 * global dtc by default.
 		 */
-		if (!strictlimit)
+		if (!strictlimit) {
 			wb_dirty_limits(gdtc);
 
+			if ((current->flags & PF_LOCAL_THROTTLE) &&
+			    gdtc->wb_dirty <
+			    dirty_freerun_ceiling(gdtc->wb_thresh,
+						  gdtc->wb_bg_thresh))
+				/*
+				 * LOCAL_THROTTLE tasks must not be throttled
+				 * when below the per-wb freerun ceiling.
+				 */
+				goto free_running;
+		}
+
 		dirty_exceeded = (gdtc->wb_dirty > gdtc->wb_thresh) &&
 			((gdtc->dirty > gdtc->thresh) || strictlimit);
 
@@ -1689,9 +1703,20 @@ static void balance_dirty_pages(struct b
 			 * both global and memcg domains.  Choose the one
 			 * w/ lower pos_ratio.
 			 */
-			if (!strictlimit)
+			if (!strictlimit) {
 				wb_dirty_limits(mdtc);
 
+				if ((current->flags & PF_LOCAL_THROTTLE) &&
+				    mdtc->wb_dirty <
+				    dirty_freerun_ceiling(mdtc->wb_thresh,
+							  mdtc->wb_bg_thresh))
+					/*
+					 * LOCAL_THROTTLE tasks must not be
+					 * throttled when below the per-wb
+					 * freerun ceiling.
+					 */
+					goto free_running;
+			}
 			dirty_exceeded |= (mdtc->wb_dirty > mdtc->wb_thresh) &&
 				((mdtc->dirty > mdtc->thresh) || strictlimit);
 
--- a/mm/vmscan.c~mm-replace-pf_less_throttle-with-pf_local_throttle
+++ a/mm/vmscan.c
@@ -1878,13 +1878,13 @@ static unsigned noinline_for_stack move_
 
 /*
  * If a kernel thread (such as nfsd for loop-back mounts) services
- * a backing device by writing to the page cache it sets PF_LESS_THROTTLE.
+ * a backing device by writing to the page cache it sets PF_LOCAL_THROTTLE.
  * In that case we should only throttle if the backing device it is
  * writing to is congested.  In other cases it is safe to throttle.
  */
 static int current_may_throttle(void)
 {
-	return !(current->flags & PF_LESS_THROTTLE) ||
+	return !(current->flags & PF_LOCAL_THROTTLE) ||
 		current->backing_dev_info == NULL ||
 		bdi_write_congested(current->backing_dev_info);
 }
_
