From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, david@fromorbit.com, guro@fb.com,
	hannes@cmpxchg.org, linux-mm@kvack.org,
	mm-commits@vger.kernel.org, tj@kernel.org,
	torvalds@linux-foundation.org
Subject: [patch 01/87] vfs: keep inodes with page cache off the inode shrinker LRU
Date: Mon, 08 Nov 2021 18:31:24 -0800
Message-ID: <20211109023124.-zo_v1Nio%akpm@linux-foundation.org>
In-Reply-To: <20211108183057.809e428e841088b657a975ec@linux-foundation.org>

From: Johannes Weiner <hannes@cmpxchg.org>
Subject: vfs: keep inodes with page cache off the inode shrinker LRU

Historically (pre-2.5), the inode shrinker used to reclaim only empty
inodes and skip over those that still contained page cache.  This caused
problems on highmem hosts: struct inode could fill up the lowmem zones
before the cache was getting reclaimed in the highmem zones.

To address this, the inode shrinker started to strip page cache to
facilitate reclaiming lowmem.  However, this comes with its own set of
problems: the shrinkers may drop actively used page cache just because the
inodes are not currently open or dirty - think working with a large git
tree.  It further doesn't respect cgroup memory protection settings and
can cause priority inversions between containers.

Nowadays, the page cache also holds non-resident info for evicted cache
pages in order to detect refaults.  We've come to rely heavily on this
data inside reclaim for protecting the cache workingset and driving swap
behavior.  We also use it to quantify and report workload health through
psi.  The latter in turn is used for fleet health monitoring, as well as
driving automated memory sizing of workloads and containers, proactive
reclaim and memory offloading schemes.

The consequence of dropping page cache prematurely is that we're seeing
subtle and not-so-subtle failures in all of the above-mentioned scenarios,
with workloads generally entering unexpected thrashing states while
losing the ability to reliably detect it.

To fix this, at least on non-highmem systems, simply going back to
rotating populated inodes on the LRU isn't feasible.  We've tried (commit
a76cf1a474d7 ("mm: don't reclaim inodes with many attached pages")) and
failed (commit 69056ee6a8a3 ("Revert "mm: don't reclaim inodes with many
attached pages"")).  The
issue is mostly that shrinker pools attract pressure based on their size,
and when objects get skipped the shrinkers remember this as deferred
reclaim work.  This accumulates excessive pressure on the remaining
inodes, and we can quickly eat into heavily used ones, or dirty ones that
require IO to reclaim, when there potentially is plenty of cold, clean
cache around still.

Instead, this patch keeps populated inodes off the inode LRU in the first
place - just like an open file or dirty state would.  An otherwise clean
and unused inode then gets queued when the last cache entry disappears. 
This solves the problem without reintroducing the reclaim issues, and
generally is a bit more scalable than having to wade through potentially
hundreds of thousands of busy inodes.

Locking is a bit tricky because the locks protecting the inode state
(i_lock) and the inode LRU (lru_list.lock) don't nest inside the irq-safe
page cache lock (i_pages.xa_lock).  Page cache deletions are serialized
through i_lock, taken before the i_pages lock, to make sure depopulated
inodes are queued reliably.  Additions may race with deletions, but we'll
check again in the shrinker.  If additions race with the shrinker itself,
we're protected by the i_lock: if find_inode() or iput() win, the shrinker
will bail on the elevated i_count or I_REFERENCED; if the shrinker wins
and goes ahead with the inode, it will set I_FREEING and inhibit further
igets(), which will cause the other side to create a new instance of the
inode instead.

Link: https://lkml.kernel.org/r/20210614211904.14420-4-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Roman Gushchin <guro@fb.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Dave Chinner <david@fromorbit.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 fs/inode.c              |   46 ++++++++++++++++++++--------------
 fs/internal.h           |    1 
 include/linux/fs.h      |    1 
 include/linux/pagemap.h |   50 ++++++++++++++++++++++++++++++++++++++
 mm/filemap.c            |    8 ++++++
 mm/truncate.c           |   19 ++++++++++++--
 mm/vmscan.c             |    7 +++++
 mm/workingset.c         |   10 +++++++
 8 files changed, 120 insertions(+), 22 deletions(-)

--- a/fs/inode.c~vfs-keep-inodes-with-page-cache-off-the-inode-shrinker-lru
+++ a/fs/inode.c
@@ -428,11 +428,20 @@ void ihold(struct inode *inode)
 }
 EXPORT_SYMBOL(ihold);
 
-static void inode_lru_list_add(struct inode *inode)
+static void __inode_add_lru(struct inode *inode, bool rotate)
 {
+	if (inode->i_state & (I_DIRTY_ALL | I_SYNC | I_FREEING | I_WILL_FREE))
+		return;
+	if (atomic_read(&inode->i_count))
+		return;
+	if (!(inode->i_sb->s_flags & SB_ACTIVE))
+		return;
+	if (!mapping_shrinkable(&inode->i_data))
+		return;
+
 	if (list_lru_add(&inode->i_sb->s_inode_lru, &inode->i_lru))
 		this_cpu_inc(nr_unused);
-	else
+	else if (rotate)
 		inode->i_state |= I_REFERENCED;
 }
 
@@ -443,16 +452,11 @@ static void inode_lru_list_add(struct in
  */
 void inode_add_lru(struct inode *inode)
 {
-	if (!(inode->i_state & (I_DIRTY_ALL | I_SYNC |
-				I_FREEING | I_WILL_FREE)) &&
-	    !atomic_read(&inode->i_count) && inode->i_sb->s_flags & SB_ACTIVE)
-		inode_lru_list_add(inode);
+	__inode_add_lru(inode, false);
 }
 
-
 static void inode_lru_list_del(struct inode *inode)
 {
-
 	if (list_lru_del(&inode->i_sb->s_inode_lru, &inode->i_lru))
 		this_cpu_dec(nr_unused);
 }
@@ -728,10 +732,6 @@ again:
 /*
  * Isolate the inode from the LRU in preparation for freeing it.
  *
- * Any inodes which are pinned purely because of attached pagecache have their
- * pagecache removed.  If the inode has metadata buffers attached to
- * mapping->private_list then try to remove them.
- *
  * If the inode has the I_REFERENCED flag set, then it means that it has been
  * used recently - the flag is set in iput_final(). When we encounter such an
  * inode, clear the flag and move it to the back of the LRU so it gets another
@@ -747,31 +747,39 @@ static enum lru_status inode_lru_isolate
 	struct inode	*inode = container_of(item, struct inode, i_lru);
 
 	/*
-	 * we are inverting the lru lock/inode->i_lock here, so use a trylock.
-	 * If we fail to get the lock, just skip it.
+	 * We are inverting the lru lock/inode->i_lock here, so use a
+	 * trylock. If we fail to get the lock, just skip it.
 	 */
 	if (!spin_trylock(&inode->i_lock))
 		return LRU_SKIP;
 
 	/*
-	 * Referenced or dirty inodes are still in use. Give them another pass
-	 * through the LRU as we canot reclaim them now.
+	 * Inodes can get referenced, redirtied, or repopulated while
+	 * they're already on the LRU, and this can make them
+	 * unreclaimable for a while. Remove them lazily here; iput,
+	 * sync, or the last page cache deletion will requeue them.
 	 */
 	if (atomic_read(&inode->i_count) ||
-	    (inode->i_state & ~I_REFERENCED)) {
+	    (inode->i_state & ~I_REFERENCED) ||
+	    !mapping_shrinkable(&inode->i_data)) {
 		list_lru_isolate(lru, &inode->i_lru);
 		spin_unlock(&inode->i_lock);
 		this_cpu_dec(nr_unused);
 		return LRU_REMOVED;
 	}
 
-	/* recently referenced inodes get one more pass */
+	/* Recently referenced inodes get one more pass */
 	if (inode->i_state & I_REFERENCED) {
 		inode->i_state &= ~I_REFERENCED;
 		spin_unlock(&inode->i_lock);
 		return LRU_ROTATE;
 	}
 
+	/*
+	 * On highmem systems, mapping_shrinkable() permits dropping
+	 * page cache in order to free up struct inodes: lowmem might
+	 * be under pressure before the cache inside the highmem zone.
+	 */
 	if (inode_has_buffers(inode) || !mapping_empty(&inode->i_data)) {
 		__iget(inode);
 		spin_unlock(&inode->i_lock);
@@ -1638,7 +1646,7 @@ static void iput_final(struct inode *ino
 	if (!drop &&
 	    !(inode->i_state & I_DONTCACHE) &&
 	    (sb->s_flags & SB_ACTIVE)) {
-		inode_add_lru(inode);
+		__inode_add_lru(inode, true);
 		spin_unlock(&inode->i_lock);
 		return;
 	}
--- a/fs/internal.h~vfs-keep-inodes-with-page-cache-off-the-inode-shrinker-lru
+++ a/fs/internal.h
@@ -149,7 +149,6 @@ extern int vfs_open(const struct path *,
  * inode.c
  */
 extern long prune_icache_sb(struct super_block *sb, struct shrink_control *sc);
-extern void inode_add_lru(struct inode *inode);
 extern int dentry_needs_remove_privs(struct dentry *dentry);
 
 /*
--- a/include/linux/fs.h~vfs-keep-inodes-with-page-cache-off-the-inode-shrinker-lru
+++ a/include/linux/fs.h
@@ -3193,6 +3193,7 @@ static inline void remove_inode_hash(str
 }
 
 extern void inode_sb_list_add(struct inode *inode);
+extern void inode_add_lru(struct inode *inode);
 
 extern int sb_set_blocksize(struct super_block *, int);
 extern int sb_min_blocksize(struct super_block *, int);
--- a/include/linux/pagemap.h~vfs-keep-inodes-with-page-cache-off-the-inode-shrinker-lru
+++ a/include/linux/pagemap.h
@@ -24,6 +24,56 @@ static inline bool mapping_empty(struct
 }
 
 /*
+ * mapping_shrinkable - test if page cache state allows inode reclaim
+ * @mapping: the page cache mapping
+ *
+ * This checks the mapping's cache state for the purpose of inode
+ * reclaim and LRU management.
+ *
+ * The caller is expected to hold the i_lock, but is not required to
+ * hold the i_pages lock, which usually protects cache state. That's
+ * because the i_lock and the list_lru lock that protect the inode and
+ * its LRU state don't nest inside the irq-safe i_pages lock.
+ *
+ * Cache deletions are performed under the i_lock, which ensures that
+ * when an inode goes empty, it will reliably get queued on the LRU.
+ *
+ * Cache additions do not acquire the i_lock and may race with this
+ * check, in which case we'll report the inode as shrinkable when it
+ * has cache pages. This is okay: the shrinker also checks the
+ * refcount and the referenced bit, which will be elevated or set in
+ * the process of adding new cache pages to an inode.
+ */
+static inline bool mapping_shrinkable(struct address_space *mapping)
+{
+	void *head;
+
+	/*
+	 * On highmem systems, there could be lowmem pressure from the
+	 * inodes before there is highmem pressure from the page
+	 * cache. Make inodes shrinkable regardless of cache state.
+	 */
+	if (IS_ENABLED(CONFIG_HIGHMEM))
+		return true;
+
+	/* Cache completely empty? Shrink away. */
+	head = rcu_access_pointer(mapping->i_pages.xa_head);
+	if (!head)
+		return true;
+
+	/*
+	 * The xarray stores single offset-0 entries directly in the
+	 * head pointer, which allows non-resident page cache entries
+	 * to escape the shadow shrinker's list of xarray nodes. The
+	 * inode shrinker needs to pick them up under memory pressure.
+	 */
+	if (!xa_is_node(head) && xa_is_value(head))
+		return true;
+
+	return false;
+}
+
+/*
  * Bits in mapping->flags.
  */
 enum mapping_flags {
--- a/mm/filemap.c~vfs-keep-inodes-with-page-cache-off-the-inode-shrinker-lru
+++ a/mm/filemap.c
@@ -262,9 +262,13 @@ void delete_from_page_cache(struct page
 	struct address_space *mapping = page_mapping(page);
 
 	BUG_ON(!PageLocked(page));
+	spin_lock(&mapping->host->i_lock);
 	xa_lock_irq(&mapping->i_pages);
 	__delete_from_page_cache(page, NULL);
 	xa_unlock_irq(&mapping->i_pages);
+	if (mapping_shrinkable(mapping))
+		inode_add_lru(mapping->host);
+	spin_unlock(&mapping->host->i_lock);
 
 	page_cache_free_page(mapping, page);
 }
@@ -340,6 +344,7 @@ void delete_from_page_cache_batch(struct
 	if (!pagevec_count(pvec))
 		return;
 
+	spin_lock(&mapping->host->i_lock);
 	xa_lock_irq(&mapping->i_pages);
 	for (i = 0; i < pagevec_count(pvec); i++) {
 		trace_mm_filemap_delete_from_page_cache(pvec->pages[i]);
@@ -348,6 +353,9 @@ void delete_from_page_cache_batch(struct
 	}
 	page_cache_delete_batch(mapping, pvec);
 	xa_unlock_irq(&mapping->i_pages);
+	if (mapping_shrinkable(mapping))
+		inode_add_lru(mapping->host);
+	spin_unlock(&mapping->host->i_lock);
 
 	for (i = 0; i < pagevec_count(pvec); i++)
 		page_cache_free_page(mapping, pvec->pages[i]);
--- a/mm/truncate.c~vfs-keep-inodes-with-page-cache-off-the-inode-shrinker-lru
+++ a/mm/truncate.c
@@ -45,9 +45,13 @@ static inline void __clear_shadow_entry(
 static void clear_shadow_entry(struct address_space *mapping, pgoff_t index,
 			       void *entry)
 {
+	spin_lock(&mapping->host->i_lock);
 	xa_lock_irq(&mapping->i_pages);
 	__clear_shadow_entry(mapping, index, entry);
 	xa_unlock_irq(&mapping->i_pages);
+	if (mapping_shrinkable(mapping))
+		inode_add_lru(mapping->host);
+	spin_unlock(&mapping->host->i_lock);
 }
 
 /*
@@ -73,8 +77,10 @@ static void truncate_exceptional_pvec_en
 		return;
 
 	dax = dax_mapping(mapping);
-	if (!dax)
+	if (!dax) {
+		spin_lock(&mapping->host->i_lock);
 		xa_lock_irq(&mapping->i_pages);
+	}
 
 	for (i = j; i < pagevec_count(pvec); i++) {
 		struct page *page = pvec->pages[i];
@@ -93,8 +99,12 @@ static void truncate_exceptional_pvec_en
 		__clear_shadow_entry(mapping, index, page);
 	}
 
-	if (!dax)
+	if (!dax) {
 		xa_unlock_irq(&mapping->i_pages);
+		if (mapping_shrinkable(mapping))
+			inode_add_lru(mapping->host);
+		spin_unlock(&mapping->host->i_lock);
+	}
 	pvec->nr = j;
 }
 
@@ -567,6 +577,7 @@ invalidate_complete_page2(struct address
 	if (page_has_private(page) && !try_to_release_page(page, GFP_KERNEL))
 		return 0;
 
+	spin_lock(&mapping->host->i_lock);
 	xa_lock_irq(&mapping->i_pages);
 	if (PageDirty(page))
 		goto failed;
@@ -574,6 +585,9 @@ invalidate_complete_page2(struct address
 	BUG_ON(page_has_private(page));
 	__delete_from_page_cache(page, NULL);
 	xa_unlock_irq(&mapping->i_pages);
+	if (mapping_shrinkable(mapping))
+		inode_add_lru(mapping->host);
+	spin_unlock(&mapping->host->i_lock);
 
 	if (mapping->a_ops->freepage)
 		mapping->a_ops->freepage(page);
@@ -582,6 +596,7 @@ invalidate_complete_page2(struct address
 	return 1;
 failed:
 	xa_unlock_irq(&mapping->i_pages);
+	spin_unlock(&mapping->host->i_lock);
 	return 0;
 }
 
--- a/mm/vmscan.c~vfs-keep-inodes-with-page-cache-off-the-inode-shrinker-lru
+++ a/mm/vmscan.c
@@ -1190,6 +1190,8 @@ static int __remove_mapping(struct addre
 	BUG_ON(!PageLocked(page));
 	BUG_ON(mapping != page_mapping(page));
 
+	if (!PageSwapCache(page))
+		spin_lock(&mapping->host->i_lock);
 	xa_lock_irq(&mapping->i_pages);
 	/*
 	 * The non racy check for a busy page.
@@ -1258,6 +1260,9 @@ static int __remove_mapping(struct addre
 			shadow = workingset_eviction(page, target_memcg);
 		__delete_from_page_cache(page, shadow);
 		xa_unlock_irq(&mapping->i_pages);
+		if (mapping_shrinkable(mapping))
+			inode_add_lru(mapping->host);
+		spin_unlock(&mapping->host->i_lock);
 
 		if (freepage != NULL)
 			freepage(page);
@@ -1267,6 +1272,8 @@ static int __remove_mapping(struct addre
 
 cannot_free:
 	xa_unlock_irq(&mapping->i_pages);
+	if (!PageSwapCache(page))
+		spin_unlock(&mapping->host->i_lock);
 	return 0;
 }
 
--- a/mm/workingset.c~vfs-keep-inodes-with-page-cache-off-the-inode-shrinker-lru
+++ a/mm/workingset.c
@@ -543,6 +543,13 @@ static enum lru_status shadow_lru_isolat
 		goto out;
 	}
 
+	if (!spin_trylock(&mapping->host->i_lock)) {
+		xa_unlock(&mapping->i_pages);
+		spin_unlock_irq(lru_lock);
+		ret = LRU_RETRY;
+		goto out;
+	}
+
 	list_lru_isolate(lru, item);
 	__dec_lruvec_kmem_state(node, WORKINGSET_NODES);
 
@@ -562,6 +569,9 @@ static enum lru_status shadow_lru_isolat
 
 out_invalid:
 	xa_unlock_irq(&mapping->i_pages);
+	if (mapping_shrinkable(mapping))
+		inode_add_lru(mapping->host);
+	spin_unlock(&mapping->host->i_lock);
 	ret = LRU_REMOVED_RETRY;
 out:
 	cond_resched();
_
