* Re: [PATCH v1 06/12] mm: thp: make split queue lock safe when LRU pages are reparented
@ 2021-08-15 3:18 kernel test robot
From: kernel test robot @ 2021-08-15 3:18 UTC (permalink / raw)
To: kbuild
CC: kbuild-all@lists.01.org
In-Reply-To: <20210814052519.86679-7-songmuchun@bytedance.com>
References: <20210814052519.86679-7-songmuchun@bytedance.com>
TO: Muchun Song <songmuchun@bytedance.com>
TO: guro@fb.com
TO: hannes@cmpxchg.org
TO: mhocko@kernel.org
TO: akpm@linux-foundation.org
TO: shakeelb@google.com
TO: vdavydov.dev@gmail.com
CC: linux-kernel@vger.kernel.org
CC: linux-mm@kvack.org
CC: duanxiongchun@bytedance.com
CC: fam.zheng@bytedance.com
Hi Muchun,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on next-20210813]
[cannot apply to hnaz-linux-mm/master cgroup/for-next linus/master v5.14-rc5 v5.14-rc4 v5.14-rc3]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Muchun-Song/Use-obj_cgroup-APIs-to-charge-the-LRU-pages/20210814-132844
base: 4b358aabb93a2c654cd1dcab1a25a589f6e2b153
:::::: branch date: 22 hours ago
:::::: commit date: 22 hours ago
config: i386-randconfig-s001-20210815 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce:
# apt-get install sparse
# sparse version: v0.6.3-348-gf0e6938b-dirty
# https://github.com/0day-ci/linux/commit/f19b75eb79975f101227d4b99f0aeda46a378c98
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Muchun-Song/Use-obj_cgroup-APIs-to-charge-the-LRU-pages/20210814-132844
git checkout f19b75eb79975f101227d4b99f0aeda46a378c98
# save the attached .config to linux build tree
make W=1 C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' O=build_dir ARCH=i386 SHELL=/bin/bash
If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>
sparse warnings: (new ones prefixed by >>)
mm/huge_memory.c: note: in included file (through include/linux/rbtree.h, include/linux/mm_types.h, include/linux/mmzone.h, ...):
include/linux/rcupdate.h:718:9: sparse: sparse: context imbalance in 'folio_split_queue_lock' - wrong count at exit
>> include/linux/rcupdate.h:718:9: sparse: sparse: context imbalance in 'folio_split_queue_lock_irqsave' - wrong count at exit
mm/huge_memory.c:1634:20: sparse: sparse: context imbalance in 'madvise_free_huge_pmd' - unexpected unlock
mm/huge_memory.c:1671:28: sparse: sparse: context imbalance in 'zap_huge_pmd' - unexpected unlock
mm/huge_memory.c:1778:28: sparse: sparse: context imbalance in 'move_huge_pmd' - unexpected unlock
mm/huge_memory.c:1889:20: sparse: sparse: context imbalance in 'change_huge_pmd' - unexpected unlock
mm/huge_memory.c:1899:12: sparse: sparse: context imbalance in '__pmd_trans_huge_lock' - wrong count at exit
mm/huge_memory.c:2533:29: sparse: sparse: context imbalance in '__split_huge_page' - unexpected unlock
mm/huge_memory.c:2786:17: sparse: sparse: context imbalance in 'split_huge_page_to_list' - different lock contexts for basic block
mm/huge_memory.c:2813:38: sparse: sparse: context imbalance in 'free_transhuge_page' - unexpected unlock
mm/huge_memory.c:2850:38: sparse: sparse: context imbalance in 'deferred_split_huge_page' - unexpected unlock
vim +/folio_split_queue_lock_irqsave +718 include/linux/rcupdate.h
^1da177e4c3f41 Linus Torvalds 2005-04-16 691
^1da177e4c3f41 Linus Torvalds 2005-04-16 692 /*
^1da177e4c3f41 Linus Torvalds 2005-04-16 693 * So where is rcu_write_lock()? It does not exist, as there is no
^1da177e4c3f41 Linus Torvalds 2005-04-16 694 * way for writers to lock out RCU readers. This is a feature, not
^1da177e4c3f41 Linus Torvalds 2005-04-16 695 * a bug -- this property is what provides RCU's performance benefits.
^1da177e4c3f41 Linus Torvalds 2005-04-16 696 * Of course, writers must coordinate with each other. The normal
^1da177e4c3f41 Linus Torvalds 2005-04-16 697 * spinlock primitives work well for this, but any other technique may be
^1da177e4c3f41 Linus Torvalds 2005-04-16 698 * used as well. RCU does not care how the writers keep out of each
^1da177e4c3f41 Linus Torvalds 2005-04-16 699 * others' way, as long as they do so.
^1da177e4c3f41 Linus Torvalds 2005-04-16 700 */
3d76c082907e8f Paul E. McKenney 2009-09-28 701
3d76c082907e8f Paul E. McKenney 2009-09-28 702 /**
ca5ecddfa8fcbd Paul E. McKenney 2010-04-28 703 * rcu_read_unlock() - marks the end of an RCU read-side critical section.
3d76c082907e8f Paul E. McKenney 2009-09-28 704 *
0223846010750e Paul E. McKenney 2021-04-29 705 * In almost all situations, rcu_read_unlock() is immune from deadlock.
0223846010750e Paul E. McKenney 2021-04-29 706 * In recent kernels that have consolidated synchronize_sched() and
0223846010750e Paul E. McKenney 2021-04-29 707 * synchronize_rcu_bh() into synchronize_rcu(), this deadlock immunity
0223846010750e Paul E. McKenney 2021-04-29 708 * also extends to the scheduler's runqueue and priority-inheritance
0223846010750e Paul E. McKenney 2021-04-29 709 * spinlocks, courtesy of the quiescent-state deferral that is carried
0223846010750e Paul E. McKenney 2021-04-29 710 * out when rcu_read_unlock() is invoked with interrupts disabled.
f27bc4873fa8b7 Paul E. McKenney 2014-05-04 711 *
3d76c082907e8f Paul E. McKenney 2009-09-28 712 * See rcu_read_lock() for more information.
3d76c082907e8f Paul E. McKenney 2009-09-28 713 */
bc33f24bdca8b6 Paul E. McKenney 2009-08-22 714 static inline void rcu_read_unlock(void)
bc33f24bdca8b6 Paul E. McKenney 2009-08-22 715 {
f78f5b90c4ffa5 Paul E. McKenney 2015-06-18 716 RCU_LOCKDEP_WARN(!rcu_is_watching(),
bde23c6892878e Heiko Carstens 2012-02-01 717 "rcu_read_unlock() used illegally while idle");
bc33f24bdca8b6 Paul E. McKenney 2009-08-22 @718 __release(RCU);
bc33f24bdca8b6 Paul E. McKenney 2009-08-22 719 __rcu_read_unlock();
d24209bb689e2c Paul E. McKenney 2015-01-21 720 rcu_lock_release(&rcu_lock_map); /* Keep acq info for rls diags. */
bc33f24bdca8b6 Paul E. McKenney 2009-08-22 721 }
^1da177e4c3f41 Linus Torvalds 2005-04-16 722
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
* [PATCH v1 00/12] Use obj_cgroup APIs to charge the LRU pages
@ 2021-08-14 5:25 Muchun Song
2021-08-14 5:25 ` [PATCH v1 06/12] mm: thp: make split queue lock safe when LRU pages are reparented Muchun Song
From: Muchun Song @ 2021-08-14 5:25 UTC (permalink / raw)
To: guro, hannes, mhocko, akpm, shakeelb, vdavydov.dev
Cc: linux-kernel, linux-mm, duanxiongchun, fam.zheng, bsingharora,
shy828301, alexs, smuchun, zhengqi.arch, Muchun Song
Hi,
This version is based on next-20210811 and drops the RFC tag from the
previous version. Comments and reviews are welcome. Thanks.
Since the following patchsets were applied, all kernel memory is charged
with the new obj_cgroup APIs:
[v17,00/19] The new cgroup slab memory controller[1]
[v5,0/7] Use obj_cgroup APIs to charge kmem pages[2]
But user memory allocations (LRU pages) can still pin memcgs for a long
time. This happens at a larger scale and is causing recurring problems in
the real world: page cache doesn't get reclaimed for a long time, or is
used by the second, third, fourth, ... instance of the same job that was
restarted into a new cgroup every time. Unreclaimable dying cgroups pile
up, waste memory, and make page reclaim very inefficient.
We can convert LRU pages and most other raw memcg pins to objcg references
to fix this problem; the LRU pages will then no longer pin the memcgs.
This patchset makes the LRU pages drop their reference to the memory
cgroup by using the obj_cgroup APIs. With it applied, the number of dying
cgroups no longer increases when running the following test script:
```bash
#!/bin/bash
cat /proc/cgroups | grep memory
cd /sys/fs/cgroup/memory
for i in {1..500}
do
mkdir test
echo $$ > test/cgroup.procs
sleep 60 &
echo $$ > cgroup.procs
echo `cat test/cgroup.procs` > cgroup.procs
rmdir test
done
cat /proc/cgroups | grep memory
```
Thanks.
[1] https://lore.kernel.org/linux-mm/20200623015846.1141975-1-guro@fb.com/
[2] https://lore.kernel.org/linux-mm/20210319163821.20704-1-songmuchun@bytedance.com/
Changelog in v1:
1. Drop RFC tag.
2. Rebase to linux next-20210811.
Changelog in RFC v4:
1. Collect Acked-by from Roman.
2. Rebase to linux next-20210525.
3. Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
4. Change the patch 1 title to "prepare objcg API for non-kmem usage".
5. Convert reparent_ops_head to an array in patch 8.
Thanks for Roman's review and suggestions.
Changelog in RFC v3:
1. Drop the code cleanup and simplification patches. Gather those patches
into a separate series[1].
2. Rework patch #1 suggested by Johannes.
Changelog in RFC v2:
1. Collect Acked-by tags by Johannes. Thanks.
2. Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
3. Fix move_pages_to_lru().
Muchun Song (12):
mm: memcontrol: prepare objcg API for non-kmem usage
mm: memcontrol: introduce compact_folio_lruvec_lock_irqsave
mm: memcontrol: make lruvec lock safe when LRU pages are reparented
mm: vmscan: rework move_pages_to_lru()
mm: thp: introduce folio_split_queue_lock{_irqsave}()
mm: thp: make split queue lock safe when LRU pages are reparented
mm: memcontrol: make all the callers of {folio,page}_memcg() safe
mm: memcontrol: introduce memcg_reparent_ops
mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
mm: memcontrol: rename {un}lock_page_memcg() to {un}lock_page_objcg()
mm: lru: add VM_BUG_ON_FOLIO to lru maintenance function
mm: lru: use lruvec lock to serialize memcg changes
Documentation/admin-guide/cgroup-v1/memory.rst | 2 +-
fs/buffer.c | 11 +-
fs/fs-writeback.c | 23 +-
include/linux/memcontrol.h | 184 ++++----
include/linux/mm.h | 1 +
include/linux/mm_inline.h | 15 +-
mm/compaction.c | 39 +-
mm/filemap.c | 2 +-
mm/huge_memory.c | 162 ++++++--
mm/memcontrol.c | 554 ++++++++++++++++++-------
mm/migrate.c | 4 +
mm/page-writeback.c | 6 +-
mm/page_io.c | 5 +-
mm/rmap.c | 14 +-
mm/swap.c | 49 +--
mm/vmscan.c | 57 ++-
16 files changed, 778 insertions(+), 350 deletions(-)
--
2.11.0
* [PATCH v1 06/12] mm: thp: make split queue lock safe when LRU pages are reparented
2021-08-14 5:25 [PATCH v1 00/12] Use obj_cgroup APIs to charge the LRU pages Muchun Song
@ 2021-08-14 5:25 ` Muchun Song
From: Muchun Song @ 2021-08-14 5:25 UTC (permalink / raw)
To: guro, hannes, mhocko, akpm, shakeelb, vdavydov.dev
Cc: linux-kernel, linux-mm, duanxiongchun, fam.zheng, bsingharora,
shy828301, alexs, smuchun, zhengqi.arch, Muchun Song
Similar to the lruvec lock, use the same retry-based approach to make the
split queue lock safe when LRU pages are reparented: look up the split
queue under RCU, take its lock, and retry if the folio was reparented in
the meantime.
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
mm/huge_memory.c | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c49ef28e48c1..22fbf2c74d49 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -535,9 +535,22 @@ static struct deferred_split *folio_split_queue_lock(struct folio *folio)
{
struct deferred_split *queue;
+ rcu_read_lock();
+retry:
queue = folio_split_queue(folio);
spin_lock(&queue->split_queue_lock);
+ if (unlikely(split_queue_memcg(queue) != folio_memcg(folio))) {
+ spin_unlock(&queue->split_queue_lock);
+ goto retry;
+ }
+
+ /*
+ * Preemption is disabled inside spin_lock(), which can serve
+ * as an RCU read-side critical section.
+ */
+ rcu_read_unlock();
+
return queue;
}
@@ -546,9 +559,19 @@ folio_split_queue_lock_irqsave(struct folio *folio, unsigned long *flags)
{
struct deferred_split *queue;
+ rcu_read_lock();
+retry:
queue = folio_split_queue(folio);
spin_lock_irqsave(&queue->split_queue_lock, *flags);
+ if (unlikely(split_queue_memcg(queue) != folio_memcg(folio))) {
+ spin_unlock_irqrestore(&queue->split_queue_lock, *flags);
+ goto retry;
+ }
+
+ /* See the comments in folio_split_queue_lock(). */
+ rcu_read_unlock();
+
return queue;
}
--
2.11.0