From: Michal Hocko <mhocko@suse.com>
To: Suren Baghdasaryan <surenb@google.com>
Cc: akpm@linux-foundation.org, michel@lespinasse.org,
	jglisse@google.com, vbabka@suse.cz, hannes@cmpxchg.org,
	mgorman@techsingularity.net, dave@stgolabs.net,
	willy@infradead.org, liam.howlett@oracle.com,
	peterz@infradead.org, ldufour@linux.ibm.com,
	laurent.dufour@fr.ibm.com, paulmck@kernel.org, luto@kernel.org,
	songliubraving@fb.com, peterx@redhat.com, david@redhat.com,
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de,
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com,
	lstoakes@gmail.com, peterjung1337@gmail.com, rientjes@google.com,
	axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
	jannh@google.com, shakeelb@google.com, tatashin@google.com,
	edumazet@google.com, gthelen@google.com, gurua@google.com,
	arjunroy@google.com, soheil@google.com, hughlynch@google.com,
	leewalsh@google.com, posk@google.com, linux-mm@kvack.org,
	linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, kernel-team@android.com
Subject: Re: [PATCH 12/41] mm: add per-VMA lock and helper functions to control it
Date: Tue, 17 Jan 2023 16:04:25 +0100
Message-ID: <Y8a4+bV1dYNAiUkD@dhcp22.suse.cz>
In-Reply-To: <20230109205336.3665937-13-surenb@google.com>

On Mon 09-01-23 12:53:07, Suren Baghdasaryan wrote:
> Introduce a per-VMA rw_semaphore to be used during page fault handling
> instead of mmap_lock. Because there are cases when multiple VMAs need
> to be exclusively locked during VMA tree modifications, instead of the
> usual lock/unlock pattern we mark a VMA as locked by taking per-VMA lock
> exclusively and setting vma->lock_seq to the current mm->lock_seq. When
> mmap_write_lock holder is done with all modifications and drops mmap_lock,
> it will increment mm->lock_seq, effectively unlocking all VMAs marked as
> locked.

I have to say I was struggling a bit with the above and only understood
what you meant after reading the patch several times. I would phrase it
like this (feel free to use it if you consider it an improvement).

Introduce a per-VMA rw_semaphore. The lock implementation relies on
per-vma and per-mm sequence counters to note exclusive locking:
        - read lock - (implemented by vma_read_trylock) requires the
          vma (vm_lock_seq) and mm (mm_lock_seq) sequence counters to
          differ. If they match then there must be a vma exclusive lock
          held somewhere.
        - read unlock - (implemented by vma_read_unlock) is a trivial
          vma->lock unlock.
        - write lock - (vma_write_lock) requires the mmap_lock to be
          held exclusively; the current mm counter is noted on the vma
          side. This allows multiple vmas to be locked under a single
          mmap_lock write lock (e.g. during vma merging). The vma counter
          is modified under the exclusive vma lock.
        - write unlock - (vma_write_unlock_mm) is a batch release of all
          vma locks held. It doesn't pair with a specific
          vma_write_lock! It is done, before the exclusive mmap_lock is
          released, by incrementing the mm sequence counter (mm_lock_seq).
        - write downgrade - if the mmap_lock is downgraded to the read
          lock, all vma write locks are released as well (effectively the
          same as write unlock).

> VMA lock is placed on the cache line boundary so that its 'count' field
> falls into the first cache line while the rest of the fields fall into
> the second cache line. This lets the 'count' field be cached with
> other frequently accessed fields and used quickly in the uncontended
> case, while 'owner' and other fields used in the contended case will
> not invalidate the first cache line while waiting on the lock.
> 
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> ---
>  include/linux/mm.h        | 80 +++++++++++++++++++++++++++++++++++++++
>  include/linux/mm_types.h  |  8 ++++
>  include/linux/mmap_lock.h | 13 +++++++
>  kernel/fork.c             |  4 ++
>  mm/init-mm.c              |  3 ++
>  5 files changed, 108 insertions(+)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index f3f196e4d66d..ec2c4c227d51 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -612,6 +612,85 @@ struct vm_operations_struct {
>  					  unsigned long addr);
>  };
>  
> +#ifdef CONFIG_PER_VMA_LOCK
> +static inline void vma_init_lock(struct vm_area_struct *vma)
> +{
> +	init_rwsem(&vma->lock);
> +	vma->vm_lock_seq = -1;
> +}
> +
> +static inline void vma_write_lock(struct vm_area_struct *vma)
> +{
> +	int mm_lock_seq;
> +
> +	mmap_assert_write_locked(vma->vm_mm);
> +
> +	/*
> +	 * current task is holding mmap_write_lock, both vma->vm_lock_seq and
> +	 * mm->mm_lock_seq can't be concurrently modified.
> +	 */
> +	mm_lock_seq = READ_ONCE(vma->vm_mm->mm_lock_seq);
> +	if (vma->vm_lock_seq == mm_lock_seq)
> +		return;
> +
> +	down_write(&vma->lock);
> +	vma->vm_lock_seq = mm_lock_seq;
> +	up_write(&vma->lock);
> +}
> +
> +/*
> + * Try to read-lock a vma. The function is allowed to occasionally yield false
> + * locked result to avoid performance overhead, in which case we fall back to
> + * using mmap_lock. The function should never yield false unlocked result.
> + */
> +static inline bool vma_read_trylock(struct vm_area_struct *vma)
> +{
> +	/* Check before locking. A race might cause false locked result. */
> +	if (vma->vm_lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))
> +		return false;
> +
> +	if (unlikely(down_read_trylock(&vma->lock) == 0))
> +		return false;
> +
> +	/*
> +	 * Overflow might produce false locked result.
> +	 * False unlocked result is impossible because we modify and check
> +	 * vma->vm_lock_seq under vma->lock protection and mm->mm_lock_seq
> +	 * modification invalidates all existing locks.
> +	 */
> +	if (unlikely(vma->vm_lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))) {
> +		up_read(&vma->lock);
> +		return false;
> +	}
> +	return true;
> +}
> +
> +static inline void vma_read_unlock(struct vm_area_struct *vma)
> +{
> +	up_read(&vma->lock);
> +}
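
For the read side, the intended fault-path pattern would look roughly
like this (my paraphrase; the real helper, lock_vma_under_rcu(), and
FAULT_FLAG_VMA_LOCK are only introduced later in the series):

	if (vma_read_trylock(vma)) {
		fault = handle_mm_fault(vma, address,
					flags | FAULT_FLAG_VMA_LOCK, regs);
		vma_read_unlock(vma);
	} else {
		/* false "locked" result or a real writer: fall back */
		mmap_read_lock(mm);
		/* ... regular mmap_lock protected fault path ... */
	}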
> +
> +static inline void vma_assert_write_locked(struct vm_area_struct *vma)
> +{
> +	mmap_assert_write_locked(vma->vm_mm);
> +	/*
> +	 * current task is holding mmap_write_lock, both vma->vm_lock_seq and
> +	 * mm->mm_lock_seq can't be concurrently modified.
> +	 */
> +	VM_BUG_ON_VMA(vma->vm_lock_seq != READ_ONCE(vma->vm_mm->mm_lock_seq), vma);
> +}
> +
> +#else /* CONFIG_PER_VMA_LOCK */
> +
> +static inline void vma_init_lock(struct vm_area_struct *vma) {}
> +static inline void vma_write_lock(struct vm_area_struct *vma) {}
> +static inline bool vma_read_trylock(struct vm_area_struct *vma)
> +		{ return false; }
> +static inline void vma_read_unlock(struct vm_area_struct *vma) {}
> +static inline void vma_assert_write_locked(struct vm_area_struct *vma) {}
> +
> +#endif /* CONFIG_PER_VMA_LOCK */
> +
>  static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
>  {
>  	static const struct vm_operations_struct dummy_vm_ops = {};
> @@ -620,6 +699,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
>  	vma->vm_mm = mm;
>  	vma->vm_ops = &dummy_vm_ops;
>  	INIT_LIST_HEAD(&vma->anon_vma_chain);
> +	vma_init_lock(vma);
>  }
>  
>  static inline void vma_set_anonymous(struct vm_area_struct *vma)
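
The batch semantics make the write side cheap when many vmas are
touched. A sketch of the intended caller pattern (my illustration using
the existing maple tree iterator, not a hunk from this series):

	VMA_ITERATOR(vmi, mm, start);
	struct vm_area_struct *vma;

	mmap_write_lock(mm);
	for_each_vma_range(vmi, vma, end)
		vma_write_lock(vma);	/* mark each vma in the range */
	/* ... modify the vma tree ... */
	mmap_write_unlock(mm);		/* vma_write_unlock_mm() bumps
					 * mm_lock_seq, releasing them all */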
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index d5cdec1314fe..5f7c5ca89931 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -555,6 +555,11 @@ struct vm_area_struct {
>  	pgprot_t vm_page_prot;
>  	unsigned long vm_flags;		/* Flags, see mm.h. */
>  
> +#ifdef CONFIG_PER_VMA_LOCK
> +	int vm_lock_seq;
> +	struct rw_semaphore lock;
> +#endif
> +
>  	/*
>  	 * For areas with an address space and backing store,
>  	 * linkage into the address_space->i_mmap interval tree.
> @@ -680,6 +685,9 @@ struct mm_struct {
>  					  * init_mm.mmlist, and are protected
>  					  * by mmlist_lock
>  					  */
> +#ifdef CONFIG_PER_VMA_LOCK
> +		int mm_lock_seq;
> +#endif
>  
>  
>  		unsigned long hiwater_rss; /* High-watermark of RSS usage */
> diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
> index e49ba91bb1f0..40facd4c398b 100644
> --- a/include/linux/mmap_lock.h
> +++ b/include/linux/mmap_lock.h
> @@ -72,6 +72,17 @@ static inline void mmap_assert_write_locked(struct mm_struct *mm)
>  	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_lock), mm);
>  }
>  
> +#ifdef CONFIG_PER_VMA_LOCK
> +static inline void vma_write_unlock_mm(struct mm_struct *mm)
> +{
> +	mmap_assert_write_locked(mm);
> +	/* No races during update due to exclusive mmap_lock being held */
> +	WRITE_ONCE(mm->mm_lock_seq, mm->mm_lock_seq + 1);
> +}
> +#else
> +static inline void vma_write_unlock_mm(struct mm_struct *mm) {}
> +#endif
> +
>  static inline void mmap_init_lock(struct mm_struct *mm)
>  {
>  	init_rwsem(&mm->mmap_lock);
> @@ -114,12 +125,14 @@ static inline bool mmap_write_trylock(struct mm_struct *mm)
>  static inline void mmap_write_unlock(struct mm_struct *mm)
>  {
>  	__mmap_lock_trace_released(mm, true);
> +	vma_write_unlock_mm(mm);
>  	up_write(&mm->mmap_lock);
>  }
>  
>  static inline void mmap_write_downgrade(struct mm_struct *mm)
>  {
>  	__mmap_lock_trace_acquire_returned(mm, false, true);
> +	vma_write_unlock_mm(mm);
>  	downgrade_write(&mm->mmap_lock);
>  }
>  
> diff --git a/kernel/fork.c b/kernel/fork.c
> index 5986817f393c..c026d75108b3 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -474,6 +474,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
>  		 */
>  		*new = data_race(*orig);
>  		INIT_LIST_HEAD(&new->anon_vma_chain);
> +		vma_init_lock(new);
>  		dup_anon_vma_name(orig, new);
>  	}
>  	return new;
> @@ -1145,6 +1146,9 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
>  	seqcount_init(&mm->write_protect_seq);
>  	mmap_init_lock(mm);
>  	INIT_LIST_HEAD(&mm->mmlist);
> +#ifdef CONFIG_PER_VMA_LOCK
> +	WRITE_ONCE(mm->mm_lock_seq, 0);
> +#endif
>  	mm_pgtables_bytes_init(mm);
>  	mm->map_count = 0;
>  	mm->locked_vm = 0;
> diff --git a/mm/init-mm.c b/mm/init-mm.c
> index c9327abb771c..33269314e060 100644
> --- a/mm/init-mm.c
> +++ b/mm/init-mm.c
> @@ -37,6 +37,9 @@ struct mm_struct init_mm = {
>  	.page_table_lock =  __SPIN_LOCK_UNLOCKED(init_mm.page_table_lock),
>  	.arg_lock	=  __SPIN_LOCK_UNLOCKED(init_mm.arg_lock),
>  	.mmlist		= LIST_HEAD_INIT(init_mm.mmlist),
> +#ifdef CONFIG_PER_VMA_LOCK
> +	.mm_lock_seq	= 0,
> +#endif
>  	.user_ns	= &init_user_ns,
>  	.cpu_bitmap	= CPU_BITS_NONE,
>  #ifdef CONFIG_IOMMU_SVA
> -- 
> 2.39.0

-- 
Michal Hocko
SUSE Labs
