* apparmor: global buffers spin lock may get contended
@ 2021-07-13 13:19 Sergey Senozhatsky
  2021-08-15  9:47 ` John Johansen
                   ` (2 more replies)
  0 siblings, 3 replies; 21+ messages in thread
From: Sergey Senozhatsky @ 2021-07-13 13:19 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior, John Johansen
  Cc: Peter Zijlstra, Tomasz Figa, linux-kernel, linux-security-module

Hi,

We've noticed that apparmor has switched from using a per-CPU buffer pool
and per-CPU spin_lock to a global spin_lock in df323337e507a0009d3db1ea.

This seems to be causing some contention on our build machines (which have
quite a few cores), because that global spin lock is taken as part of the
stat() syscall (and perhaps some others).

E.g.

-    9.29%     0.00%  clang++          [kernel.vmlinux]                        
   - 9.28% entry_SYSCALL_64_after_hwframe                                      
      - 8.98% do_syscall_64                                                    
         - 7.43% __do_sys_newlstat                                            
            - 7.43% vfs_statx                                                  
               - 7.18% security_inode_getattr                                  
                  - 7.15% apparmor_inode_getattr                              
                     - aa_path_perm                                            
                        - 3.53% aa_get_buffer                                  
                           - 3.47% _raw_spin_lock                              
                                3.44% native_queued_spin_lock_slowpath        
                        - 3.49% aa_put_buffer.part.0                          
                           - 3.45% _raw_spin_lock                              
                                3.43% native_queued_spin_lock_slowpath   

Can we fix this contention?

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: apparmor: global buffers spin lock may get contended
  2021-07-13 13:19 apparmor: global buffers spin lock may get contended Sergey Senozhatsky
@ 2021-08-15  9:47 ` John Johansen
  2022-10-28  9:34 ` John Johansen
       [not found] ` <20221030013028.3557-1-hdanton@sina.com>
  2 siblings, 0 replies; 21+ messages in thread
From: John Johansen @ 2021-08-15  9:47 UTC (permalink / raw)
  To: Sergey Senozhatsky, Sebastian Andrzej Siewior
  Cc: Peter Zijlstra, Tomasz Figa, linux-kernel, linux-security-module

On 7/13/21 6:19 AM, Sergey Senozhatsky wrote:
> Hi,
> 
> We've noticed that apparmor has switched from using a per-CPU buffer pool
> and per-CPU spin_lock to a global spin_lock in df323337e507a0009d3db1ea.
> 
> This seems to be causing some contention on our build machines (which have
> quite a few cores), because that global spin lock is taken as part of the
> stat() syscall (and perhaps some others).
> 
> E.g.
> 
> -    9.29%     0.00%  clang++          [kernel.vmlinux]                        
>    - 9.28% entry_SYSCALL_64_after_hwframe                                      
>       - 8.98% do_syscall_64                                                    
>          - 7.43% __do_sys_newlstat                                            
>             - 7.43% vfs_statx                                                  
>                - 7.18% security_inode_getattr                                  
>                   - 7.15% apparmor_inode_getattr                              
>                      - aa_path_perm                                            
>                         - 3.53% aa_get_buffer                                  
>                            - 3.47% _raw_spin_lock                              
>                                 3.44% native_queued_spin_lock_slowpath        
>                         - 3.49% aa_put_buffer.part.0                          
>                            - 3.45% _raw_spin_lock                              
>                                 3.43% native_queued_spin_lock_slowpath   
> 
> Can we fix this contention?
> 

Sorry, this got filtered to the wrong mailbox. Yes, this is something that can
be improved, and it was a concern when the switch was made from per-CPU buffers
to the global pool.

We can look into doing a hybrid approach where we cache a buffer from the
global pool per CPU. The trick will be coming up with when the cached buffer
can be returned, so we don't run into the problems that led to
df323337e507a0009d3db1ea.
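
Very roughly, the idea would be something along these lines (just a sketch,
not a patch; get_buffer_sketch() and take_from_global_pool() are made-up
names, the latter standing in for the existing pool code, and the per-cpu
lists would still need to be initialized at boot):

static DEFINE_PER_CPU(struct list_head, cached_buffers);

char *get_buffer_sketch(void)
{
	union aa_buffer *aa_buf = NULL;
	struct list_head *cache = get_cpu_ptr(&cached_buffers);

	/* fast path: reuse a buffer cached on this cpu, no global lock */
	if (!list_empty(cache)) {
		aa_buf = list_first_entry(cache, union aa_buffer, list);
		list_del(&aa_buf->list);
	}
	put_cpu_ptr(&cached_buffers);
	if (aa_buf)
		return &aa_buf->buffer[0];

	/* slow path: fall back to the global pool under aa_buffers_lock */
	return take_from_global_pool();
}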

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: apparmor: global buffers spin lock may get contended
  2021-07-13 13:19 apparmor: global buffers spin lock may get contended Sergey Senozhatsky
  2021-08-15  9:47 ` John Johansen
@ 2022-10-28  9:34 ` John Johansen
  2022-10-31  3:52   ` Sergey Senozhatsky
       [not found] ` <20221030013028.3557-1-hdanton@sina.com>
  2 siblings, 1 reply; 21+ messages in thread
From: John Johansen @ 2022-10-28  9:34 UTC (permalink / raw)
  To: Sergey Senozhatsky, Sebastian Andrzej Siewior
  Cc: Peter Zijlstra, Tomasz Figa, linux-kernel, linux-security-module

On 7/13/21 06:19, Sergey Senozhatsky wrote:
> Hi,
> 
> We've noticed that apparmor has switched from using a per-CPU buffer pool
> and per-CPU spin_lock to a global spin_lock in df323337e507a0009d3db1ea.
> 
> This seems to be causing some contention on our build machines (which have
> quite a few cores), because that global spin lock is taken as part of the
> stat() syscall (and perhaps some others).
> 
> E.g.
> 
> -    9.29%     0.00%  clang++          [kernel.vmlinux]
>     - 9.28% entry_SYSCALL_64_after_hwframe
>        - 8.98% do_syscall_64
>           - 7.43% __do_sys_newlstat
>              - 7.43% vfs_statx
>                 - 7.18% security_inode_getattr
>                    - 7.15% apparmor_inode_getattr
>                       - aa_path_perm
>                          - 3.53% aa_get_buffer
>                             - 3.47% _raw_spin_lock
>                                  3.44% native_queued_spin_lock_slowpath
>                          - 3.49% aa_put_buffer.part.0
>                             - 3.45% _raw_spin_lock
>                                  3.43% native_queued_spin_lock_slowpath
> 
> Can we fix this contention?

Sorry for the delay on this. Below is a proposed patch that I have been testing
to deal with this issue.


 From d026988196fdbda7234fb87bc3e4aea22edcbaf9 Mon Sep 17 00:00:00 2001
From: John Johansen <john.johansen@canonical.com>
Date: Tue, 25 Oct 2022 01:18:41 -0700
Subject: [PATCH] apparmor: cache buffers on percpu list if there is lock
  contention

On a heavily loaded machine there can be lock contention on the
global buffers lock. Add a percpu list to cache buffers on when
lock contention is encountered.

When allocating buffers attempt to use cached buffers first,
before taking the global buffers lock. When freeing buffers
try to put them back to the global list but if contention is
encountered, put the buffer on the percpu list.

The length of time a buffer is held on the percpu list is dynamically
adjusted based on lock contention.  The amount of hold time is rapidly
increased and slowly ramped down.

Signed-off-by: John Johansen <john.johansen@canonical.com>
---
  security/apparmor/lsm.c | 74 ++++++++++++++++++++++++++++++++++++++---
  1 file changed, 69 insertions(+), 5 deletions(-)

diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
index 25114735bc11..0ab70171bdb6 100644
--- a/security/apparmor/lsm.c
+++ b/security/apparmor/lsm.c
@@ -49,12 +49,19 @@ union aa_buffer {
  	char buffer[1];
  };
  
+struct aa_local_cache {
+	unsigned int contention;
+	unsigned int hold;
+	struct list_head head;
+};
+
  #define RESERVE_COUNT 2
  static int reserve_count = RESERVE_COUNT;
  static int buffer_count;
  
  static LIST_HEAD(aa_global_buffers);
  static DEFINE_SPINLOCK(aa_buffers_lock);
+static DEFINE_PER_CPU(struct aa_local_cache, aa_local_buffers);
  
  /*
   * LSM hook functions
@@ -1622,14 +1629,44 @@ static int param_set_mode(const char *val, const struct kernel_param *kp)
  	return 0;
  }
  
+static void update_contention(struct aa_local_cache *cache)
+{
+	cache->contention += 3;
+	if (cache->contention > 9)
+		cache->contention = 9;
+	cache->hold += 1 << cache->contention;		/* 8, 64, 512 */
+}
+
  char *aa_get_buffer(bool in_atomic)
  {
  	union aa_buffer *aa_buf;
+	struct aa_local_cache *cache;
  	bool try_again = true;
  	gfp_t flags = (GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
  
+	/* use per cpu cached buffers first */
+	cache = get_cpu_ptr(&aa_local_buffers);
+	if (!list_empty(&cache->head)) {
+		aa_buf = list_first_entry(&cache->head, union aa_buffer, list);
+		list_del(&aa_buf->list);
+		cache->hold--;
+		put_cpu_ptr(&aa_local_buffers);
+		return &aa_buf->buffer[0];
+	}
+	put_cpu_ptr(&aa_local_buffers);
+
+	if (!spin_trylock(&aa_buffers_lock)) {
+		cache = get_cpu_ptr(&aa_local_buffers);
+		update_contention(cache);
+		put_cpu_ptr(&aa_local_buffers);
+		spin_lock(&aa_buffers_lock);
+	} else {
+		cache = get_cpu_ptr(&aa_local_buffers);
+		if (cache->contention)
+			cache->contention--;
+		put_cpu_ptr(&aa_local_buffers);
+	}
  retry:
-	spin_lock(&aa_buffers_lock);
  	if (buffer_count > reserve_count ||
  	    (in_atomic && !list_empty(&aa_global_buffers))) {
  		aa_buf = list_first_entry(&aa_global_buffers, union aa_buffer,
@@ -1655,6 +1692,7 @@ char *aa_get_buffer(bool in_atomic)
  	if (!aa_buf) {
  		if (try_again) {
  			try_again = false;
+			spin_lock(&aa_buffers_lock);
  			goto retry;
  		}
  		pr_warn_once("AppArmor: Failed to allocate a memory buffer.\n");
@@ -1666,15 +1704,32 @@ char *aa_get_buffer(bool in_atomic)
  void aa_put_buffer(char *buf)
  {
  	union aa_buffer *aa_buf;
+	struct aa_local_cache *cache;
  
  	if (!buf)
  		return;
  	aa_buf = container_of(buf, union aa_buffer, buffer[0]);
  
-	spin_lock(&aa_buffers_lock);
-	list_add(&aa_buf->list, &aa_global_buffers);
-	buffer_count++;
-	spin_unlock(&aa_buffers_lock);
+	cache = get_cpu_ptr(&aa_local_buffers);
+	if (!cache->hold) {
+		put_cpu_ptr(&aa_local_buffers);
+		if (spin_trylock(&aa_buffers_lock)) {
+			list_add(&aa_buf->list, &aa_global_buffers);
+			buffer_count++;
+			spin_unlock(&aa_buffers_lock);
+			cache = get_cpu_ptr(&aa_local_buffers);
+			if (cache->contention)
+				cache->contention--;
+			put_cpu_ptr(&aa_local_buffers);
+			return;
+		}
+		cache = get_cpu_ptr(&aa_local_buffers);
+		update_contention(cache);
+	}
+
+	/* cache in percpu list */
+	list_add(&aa_buf->list, &cache->head);
+	put_cpu_ptr(&aa_local_buffers);
  }
  
  /*
@@ -1716,6 +1771,15 @@ static int __init alloc_buffers(void)
  	union aa_buffer *aa_buf;
  	int i, num;
  
+	/*
+	 * per cpu set of cached allocated buffers used to help reduce
+	 * lock contention
+	 */
+	for_each_possible_cpu(i) {
+		per_cpu(aa_local_buffers, i).contention = 0;
+		per_cpu(aa_local_buffers, i).hold = 0;
+		INIT_LIST_HEAD(&per_cpu(aa_local_buffers, i).head);
+	}
  	/*
  	 * A function may require two buffers at once. Usually the buffers are
  	 * used for a short period of time and are shared. On UP kernel buffers
-- 
2.34.1




^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: apparmor: global buffers spin lock may get contended
       [not found] ` <20221030013028.3557-1-hdanton@sina.com>
@ 2022-10-30  6:32   ` John Johansen
  0 siblings, 0 replies; 21+ messages in thread
From: John Johansen @ 2022-10-30  6:32 UTC (permalink / raw)
  To: Hillf Danton
  Cc: Sergey Senozhatsky, Sebastian Andrzej Siewior, Peter Zijlstra,
	Tomasz Figa, linux-kernel, linux-security-module

On 10/29/22 18:30, Hillf Danton wrote:
> On 28 Oct 2022 02:34:07 -0700 John Johansen <john.johansen@canonical.com>
>> On 7/13/21 06:19, Sergey Senozhatsky wrote:
>>> Hi,
>>>
>>> We've noticed that apparmor has switched from using a per-CPU buffer pool
>>> and per-CPU spin_lock to a global spin_lock in df323337e507a0009d3db1ea.
>>>
>>> This seems to be causing some contention on our build machines (which have
>>> quite a few cores), because that global spin lock is taken as part of the
>>> stat() syscall (and perhaps some others).
>>>
>>> E.g.
>>>
>>> -    9.29%     0.00%  clang++          [kernel.vmlinux]
>>>      - 9.28% entry_SYSCALL_64_after_hwframe
>>>         - 8.98% do_syscall_64
>>>            - 7.43% __do_sys_newlstat
>>>               - 7.43% vfs_statx
>>>                  - 7.18% security_inode_getattr
>>>                     - 7.15% apparmor_inode_getattr
>>>                        - aa_path_perm
>>>                           - 3.53% aa_get_buffer
>>>                              - 3.47% _raw_spin_lock
>>>                                   3.44% native_queued_spin_lock_slowpath
>>>                           - 3.49% aa_put_buffer.part.0
>>>                              - 3.45% _raw_spin_lock
>>>                                   3.43% native_queued_spin_lock_slowpath
>>>
>>> Can we fix this contention?
>>
>> sorry for the delay on this. Below is a proposed patch that I have been testing
>> to deal with this issue.
>>
>>
>>   From d026988196fdbda7234fb87bc3e4aea22edcbaf9 Mon Sep 17 00:00:00 2001
>> From: John Johansen <john.johansen@canonical.com>
>> Date: Tue, 25 Oct 2022 01:18:41 -0700
>> Subject: [PATCH] apparmor: cache buffers on percpu list if there is lock contention
>>
>> On a heavily loaded machine there can be lock contention on the
>> global buffers lock. Add a percpu list to cache buffers on when
>> lock contention is encountered.
>>
>> When allocating buffers attempt to use cached buffers first,
>> before taking the global buffers lock. When freeing buffers
>> try to put them back to the global list but if contention is
>> encountered, put the buffer on the percpu list.
>>
>> The length of time a buffer is held on the percpu list is dynamically
>> adjusted based on lock contention.  The amount of hold time is rapidly
>> increased and slow ramped down.
>>
>> Signed-off-by: John Johansen <john.johansen@canonical.com>
>> ---
>>    security/apparmor/lsm.c | 74 ++++++++++++++++++++++++++++++++++++++---
>>    1 file changed, 69 insertions(+), 5 deletions(-)
>>
>> diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
>> index 25114735bc11..0ab70171bdb6 100644
>> --- a/security/apparmor/lsm.c
>> +++ b/security/apparmor/lsm.c
>> @@ -49,12 +49,19 @@ union aa_buffer {
>>    	char buffer[1];
>>    };
>>    
>> +struct aa_local_cache {
>> +	unsigned int contention;
>> +	unsigned int hold;
>> +	struct list_head head;
>> +};
>> +
>>    #define RESERVE_COUNT 2
>>    static int reserve_count = RESERVE_COUNT;
>>    static int buffer_count;
>>    
>>    static LIST_HEAD(aa_global_buffers);
>>    static DEFINE_SPINLOCK(aa_buffers_lock);
>> +static DEFINE_PER_CPU(struct aa_local_cache, aa_local_buffers);
>>    
>>    /*
>>     * LSM hook functions
>> @@ -1622,14 +1629,44 @@ static int param_set_mode(const char *val, const struct kernel_param *kp)
>>    	return 0;
>>    }
>>    
>> +static void update_contention(struct aa_local_cache *cache)
>> +{
>> +	cache->contention += 3;
>> +	if (cache->contention > 9)
>> +		cache->contention = 9;
>> +	cache->hold += 1 << cache->contention;		/* 8, 64, 512 */
>> +}
>> +
>>    char *aa_get_buffer(bool in_atomic)
>>    {
>>    	union aa_buffer *aa_buf;
>> +	struct aa_local_cache *cache;
>>    	bool try_again = true;
>>    	gfp_t flags = (GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
>>    
>> +	/* use per cpu cached buffers first */
>> +	cache = get_cpu_ptr(&aa_local_buffers);
>> +	if (!list_empty(&cache->head)) {
>> +		aa_buf = list_first_entry(&cache->head, union aa_buffer, list);
>> +		list_del(&aa_buf->list);
>> +		cache->hold--;
>> +		put_cpu_ptr(&aa_local_buffers);
>> +		return &aa_buf->buffer[0];
>> +	}
>> +	put_cpu_ptr(&aa_local_buffers);
>> +
>> +	if (!spin_trylock(&aa_buffers_lock)) {
>> +		cache = get_cpu_ptr(&aa_local_buffers);
>> +		update_contention(cache);
>> +		put_cpu_ptr(&aa_local_buffers);
>> +		spin_lock(&aa_buffers_lock);
>> +	} else {
>> +		cache = get_cpu_ptr(&aa_local_buffers);
>> +		if (cache->contention)
>> +			cache->contention--;
>> +		put_cpu_ptr(&aa_local_buffers);
>> +	}
>>    retry:
>> -	spin_lock(&aa_buffers_lock);
>>    	if (buffer_count > reserve_count ||
>>    	    (in_atomic && !list_empty(&aa_global_buffers))) {
>>    		aa_buf = list_first_entry(&aa_global_buffers, union aa_buffer,
>> @@ -1655,6 +1692,7 @@ char *aa_get_buffer(bool in_atomic)
>>    	if (!aa_buf) {
>>    		if (try_again) {
>>    			try_again = false;
>> +			spin_lock(&aa_buffers_lock);
>>    			goto retry;
>>    		}
>>    		pr_warn_once("AppArmor: Failed to allocate a memory buffer.\n");
>> @@ -1666,15 +1704,32 @@ char *aa_get_buffer(bool in_atomic)
>>    void aa_put_buffer(char *buf)
>>    {
>>    	union aa_buffer *aa_buf;
>> +	struct aa_local_cache *cache;
>>    
>>    	if (!buf)
>>    		return;
>>    	aa_buf = container_of(buf, union aa_buffer, buffer[0]);
>>    
>> -	spin_lock(&aa_buffers_lock);
>> -	list_add(&aa_buf->list, &aa_global_buffers);
>> -	buffer_count++;
>> -	spin_unlock(&aa_buffers_lock);
>> +	cache = get_cpu_ptr(&aa_local_buffers);
>> +	if (!cache->hold) {
>> +		put_cpu_ptr(&aa_local_buffers);
>> +		if (spin_trylock(&aa_buffers_lock)) {
>> +			list_add(&aa_buf->list, &aa_global_buffers);
>> +			buffer_count++;
> 
> Given !hold and trylock, right time to drain the perpcu cache?
> 

Yes, hold is a count of how long (or in this case a count of how many
times) to allocate from the percpu cache before trying to return the
buffer to the global buffer pool. When the time/count hits zero it's
time to try and return it.

If the trylock succeeds, then we took the global buffer pool lock
without contention and we can add the buffer back in.

As for the other cases:

hold == 0 and we fail to grab the lock
- contention is recorded and we add the buffer back to the percpu cache

hold > 0
- decrease hold and add back to the percpu cache

Since we never try to grab the spinlock if hold > 0, the lock variations
do not need to be considered.
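
Condensed, the put path boils down to this (just a summary of the cases
above, mirroring the quoted hunk rather than replacing it):

/*
 * hold == 0 and trylock succeeds -> buffer goes back on the global list,
 *                                   contention is ramped down
 * hold == 0 and trylock fails    -> update_contention(), buffer stays on
 *                                   the percpu list
 * hold > 0                       -> no lock attempt at all, buffer stays
 *                                   on the percpu list
 */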

>> +			spin_unlock(&aa_buffers_lock);
>> +			cache = get_cpu_ptr(&aa_local_buffers);
>> +			if (cache->contention)
>> +				cache->contention--;
>> +			put_cpu_ptr(&aa_local_buffers);
>> +			return;
>> +		}
>> +		cache = get_cpu_ptr(&aa_local_buffers);
>> +		update_contention(cache);
>> +	}
>> +
>> +	/* cache in percpu list */
>> +	list_add(&aa_buf->list, &cache->head);
>> +	put_cpu_ptr(&aa_local_buffers);
>>    }
>>    
>>    /*
>> @@ -1716,6 +1771,15 @@ static int __init alloc_buffers(void)
>>    	union aa_buffer *aa_buf;
>>    	int i, num;
>>    
>> +	/*
>> +	 * per cpu set of cached allocated buffers used to help reduce
>> +	 * lock contention
>> +	 */
>> +	for_each_possible_cpu(i) {
>> +		per_cpu(aa_local_buffers, i).contention = 0;
>> +		per_cpu(aa_local_buffers, i).hold = 0;
>> +		INIT_LIST_HEAD(&per_cpu(aa_local_buffers, i).head);
>> +	}
>>    	/*
>>    	 * A function may require two buffers at once. Usually the buffers are
>>    	 * used for a short period of time and are shared. On UP kernel buffers
>> -- 
>> 2.34.1


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: apparmor: global buffers spin lock may get contended
  2022-10-28  9:34 ` John Johansen
@ 2022-10-31  3:52   ` Sergey Senozhatsky
  2022-10-31  3:55     ` John Johansen
  0 siblings, 1 reply; 21+ messages in thread
From: Sergey Senozhatsky @ 2022-10-31  3:52 UTC (permalink / raw)
  To: John Johansen
  Cc: Sergey Senozhatsky, Sebastian Andrzej Siewior, Peter Zijlstra,
	Tomasz Figa, linux-kernel, linux-security-module

On (22/10/28 02:34), John Johansen wrote:
> From d026988196fdbda7234fb87bc3e4aea22edcbaf9 Mon Sep 17 00:00:00 2001
> From: John Johansen <john.johansen@canonical.com>
> Date: Tue, 25 Oct 2022 01:18:41 -0700
> Subject: [PATCH] apparmor: cache buffers on percpu list if there is lock
>  contention
> 
> On a heavily loaded machine there can be lock contention on the
> global buffers lock. Add a percpu list to cache buffers on when
> lock contention is encountered.
> 
> When allocating buffers attempt to use cached buffers first,
> before taking the global buffers lock. When freeing buffers
> try to put them back to the global list but if contention is
> encountered, put the buffer on the percpu list.
> 
> The length of time a buffer is held on the percpu list is dynamically
> adjusted based on lock contention.  The amount of hold time is rapidly
> increased and slow ramped down.
> 
> Signed-off-by: John Johansen <john.johansen@canonical.com>

Reported-by: Sergey Senozhatsky <senozhatsky@chromium.org>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: apparmor: global buffers spin lock may get contended
  2022-10-31  3:52   ` Sergey Senozhatsky
@ 2022-10-31  3:55     ` John Johansen
  2022-10-31  4:04       ` Sergey Senozhatsky
  2023-02-17  0:08       ` [PATCH v3] " John Johansen
  0 siblings, 2 replies; 21+ messages in thread
From: John Johansen @ 2022-10-31  3:55 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Sebastian Andrzej Siewior, Peter Zijlstra, Tomasz Figa,
	linux-kernel, linux-security-module

On 10/30/22 20:52, Sergey Senozhatsky wrote:
> On (22/10/28 02:34), John Johansen wrote:
>>  From d026988196fdbda7234fb87bc3e4aea22edcbaf9 Mon Sep 17 00:00:00 2001
>> From: John Johansen <john.johansen@canonical.com>
>> Date: Tue, 25 Oct 2022 01:18:41 -0700
>> Subject: [PATCH] apparmor: cache buffers on percpu list if there is lock
>>   contention
>>
>> On a heavily loaded machine there can be lock contention on the
>> global buffers lock. Add a percpu list to cache buffers on when
>> lock contention is encountered.
>>
>> When allocating buffers attempt to use cached buffers first,
>> before taking the global buffers lock. When freeing buffers
>> try to put them back to the global list but if contention is
>> encountered, put the buffer on the percpu list.
>>
>> The length of time a buffer is held on the percpu list is dynamically
>> adjusted based on lock contention.  The amount of hold time is rapidly
>> increased and slow ramped down.
>>
>> Signed-off-by: John Johansen <john.johansen@canonical.com>
> 
> Reported-by: Sergey Senozhatsky <senozhatsky@chromium.org>

Yep, thanks for catching that.


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: apparmor: global buffers spin lock may get contended
  2022-10-31  3:55     ` John Johansen
@ 2022-10-31  4:04       ` Sergey Senozhatsky
  2023-02-17  0:03         ` John Johansen
  2023-02-17  0:08       ` [PATCH v3] " John Johansen
  1 sibling, 1 reply; 21+ messages in thread
From: Sergey Senozhatsky @ 2022-10-31  4:04 UTC (permalink / raw)
  To: John Johansen
  Cc: Sergey Senozhatsky, Sebastian Andrzej Siewior, Peter Zijlstra,
	Tomasz Figa, linux-kernel, linux-security-module

On (22/10/30 20:55), John Johansen wrote:
> On 10/30/22 20:52, Sergey Senozhatsky wrote:
> > On (22/10/28 02:34), John Johansen wrote:
> > >  From d026988196fdbda7234fb87bc3e4aea22edcbaf9 Mon Sep 17 00:00:00 2001
> > > From: John Johansen <john.johansen@canonical.com>
> > > Date: Tue, 25 Oct 2022 01:18:41 -0700
> > > Subject: [PATCH] apparmor: cache buffers on percpu list if there is lock
> > >   contention
> > > 
> > > On a heavily loaded machine there can be lock contention on the
> > > global buffers lock. Add a percpu list to cache buffers on when
> > > lock contention is encountered.
> > > 
> > > When allocating buffers attempt to use cached buffers first,
> > > before taking the global buffers lock. When freeing buffers
> > > try to put them back to the global list but if contention is
> > > encountered, put the buffer on the percpu list.
> > > 
> > > The length of time a buffer is held on the percpu list is dynamically
> > > adjusted based on lock contention.  The amount of hold time is rapidly
> > > increased and slow ramped down.
> > > 
> > > Signed-off-by: John Johansen <john.johansen@canonical.com>
> > 
> > Reported-by: Sergey Senozhatsky <senozhatsky@chromium.org>
> 
> yep, thanks for catching that

Thanks for the patch! Unfortunately it'll be a bit difficult to test
it right now; I'll probably have to wait until corp pushes a new kernel
(with the patch) to the build boxes.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: apparmor: global buffers spin lock may get contended
  2022-10-31  4:04       ` Sergey Senozhatsky
@ 2023-02-17  0:03         ` John Johansen
  0 siblings, 0 replies; 21+ messages in thread
From: John Johansen @ 2023-02-17  0:03 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Sebastian Andrzej Siewior, Peter Zijlstra, Tomasz Figa,
	linux-kernel, linux-security-module

I have sent out a new version of this patch that caps a situation where the buffer lists could grow unbounded (at least theoretically).


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v3] apparmor: global buffers spin lock may get contended
  2022-10-31  3:55     ` John Johansen
  2022-10-31  4:04       ` Sergey Senozhatsky
@ 2023-02-17  0:08       ` John Johansen
  2023-02-17 10:44         ` Sebastian Andrzej Siewior
  1 sibling, 1 reply; 21+ messages in thread
From: John Johansen @ 2023-02-17  0:08 UTC (permalink / raw)
  To: LKLM
  Cc: Sergey Senozhatsky, Sebastian Andrzej Siewior, Peter Zijlstra,
	Tomasz Figa, linux-security-module, Anil Altinay

 From f44dee132b0b55386b7ea31e68c80d367b073ee0 Mon Sep 17 00:00:00 2001
From: John Johansen <john.johansen@canonical.com>
Date: Tue, 25 Oct 2022 01:18:41 -0700
Subject: [PATCH] apparmor: cache buffers on percpu list if there is lock
  contention

On a heavily loaded machine there can be lock contention on the
global buffers lock. Add a percpu list to cache buffers on when
lock contention is encountered.

When allocating buffers attempt to use cached buffers first,
before taking the global buffers lock. When freeing buffers
try to put them back to the global list but if contention is
encountered, put the buffer on the percpu list.

The length of time a buffer is held on the percpu list is dynamically
adjusted based on lock contention.  The amount of hold time is rapidly
increased and slowly ramped down.

v3:
- limit number of buffers that can be pushed onto the percpu
   list. This avoids a problem on some kernels where one percpu
   list can inherit buffers from another cpu after a reschedule,
   causing more kernel memory to be used than is necessary. Under
   normal conditions this should eventually return to normal,
   but under pathological conditions the extra memory consumption
   may have been unbounded.
v2:
- dynamically adjust buffer hold time on percpu list based on
   lock contention.
v1:
- cache buffers on percpu list on lock contention

Signed-off-by: John Johansen <john.johansen@canonical.com>
---
  security/apparmor/lsm.c | 81 ++++++++++++++++++++++++++++++++++++++---
  1 file changed, 76 insertions(+), 5 deletions(-)

diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
index 25114735bc11..21f5ea20e715 100644
--- a/security/apparmor/lsm.c
+++ b/security/apparmor/lsm.c
@@ -49,12 +49,19 @@ union aa_buffer {
  	char buffer[1];
  };
  
+struct aa_local_cache {
+	unsigned int contention;
+	unsigned int hold;
+	struct list_head head;
+};
+
  #define RESERVE_COUNT 2
  static int reserve_count = RESERVE_COUNT;
  static int buffer_count;
  
  static LIST_HEAD(aa_global_buffers);
  static DEFINE_SPINLOCK(aa_buffers_lock);
+static DEFINE_PER_CPU(struct aa_local_cache, aa_local_buffers);
  
  /*
   * LSM hook functions
@@ -1622,14 +1629,44 @@ static int param_set_mode(const char *val, const struct kernel_param *kp)
  	return 0;
  }
  
+static void update_contention(struct aa_local_cache *cache)
+{
+	cache->contention += 3;
+	if (cache->contention > 9)
+		cache->contention = 9;
+	cache->hold += 1 << cache->contention;		/* 8, 64, 512 */
+}
+
  char *aa_get_buffer(bool in_atomic)
  {
  	union aa_buffer *aa_buf;
+	struct aa_local_cache *cache;
  	bool try_again = true;
  	gfp_t flags = (GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
  
+	/* use per cpu cached buffers first */
+	cache = get_cpu_ptr(&aa_local_buffers);
+	if (!list_empty(&cache->head)) {
+		aa_buf = list_first_entry(&cache->head, union aa_buffer, list);
+		list_del(&aa_buf->list);
+		cache->hold--;
+		put_cpu_ptr(&aa_local_buffers);
+		return &aa_buf->buffer[0];
+	}
+	put_cpu_ptr(&aa_local_buffers);
+
+	if (!spin_trylock(&aa_buffers_lock)) {
+		cache = get_cpu_ptr(&aa_local_buffers);
+		update_contention(cache);
+		put_cpu_ptr(&aa_local_buffers);
+		spin_lock(&aa_buffers_lock);
+	} else {
+		cache = get_cpu_ptr(&aa_local_buffers);
+		if (cache->contention)
+			cache->contention--;
+		put_cpu_ptr(&aa_local_buffers);
+	}
  retry:
-	spin_lock(&aa_buffers_lock);
  	if (buffer_count > reserve_count ||
  	    (in_atomic && !list_empty(&aa_global_buffers))) {
  		aa_buf = list_first_entry(&aa_global_buffers, union aa_buffer,
@@ -1655,6 +1692,7 @@ char *aa_get_buffer(bool in_atomic)
  	if (!aa_buf) {
  		if (try_again) {
  			try_again = false;
+			spin_lock(&aa_buffers_lock);
  			goto retry;
  		}
  		pr_warn_once("AppArmor: Failed to allocate a memory buffer.\n");
@@ -1666,15 +1704,39 @@ char *aa_get_buffer(bool in_atomic)
  void aa_put_buffer(char *buf)
  {
  	union aa_buffer *aa_buf;
+	struct aa_local_cache *cache;
  
  	if (!buf)
  		return;
  	aa_buf = container_of(buf, union aa_buffer, buffer[0]);
  
-	spin_lock(&aa_buffers_lock);
-	list_add(&aa_buf->list, &aa_global_buffers);
-	buffer_count++;
-	spin_unlock(&aa_buffers_lock);
+	cache = get_cpu_ptr(&aa_local_buffers);
+	if (!cache->hold || cache->count >= 2) {
+		put_cpu_ptr(&aa_local_buffers);
+		if (spin_trylock(&aa_buffers_lock)) {
+		locked:
+			list_add(&aa_buf->list, &aa_global_buffers);
+			buffer_count++;
+			spin_unlock(&aa_buffers_lock);
+			cache = get_cpu_ptr(&aa_local_buffers);
+			if (cache->contention)
+				cache->contention--;
+			put_cpu_ptr(&aa_local_buffers);
+			return;
+		}
+		cache = get_cpu_ptr(&aa_local_buffers);
+		update_contention(cache);
+		if (cache->count >= 2) {
+			put_cpu_ptr(&aa_local_buffers);
+			spin_lock(&aa_buffers_lock);
+			/* force putting the buffer to global */
+			goto locked;
+		}
+	}
+
+	/* cache in percpu list */
+	list_add(&aa_buf->list, &cache->head);
+	put_cpu_ptr(&aa_local_buffers);
  }
  
  /*
@@ -1716,6 +1778,15 @@ static int __init alloc_buffers(void)
  	union aa_buffer *aa_buf;
  	int i, num;
  
+	/*
+	 * per cpu set of cached allocated buffers used to help reduce
+	 * lock contention
+	 */
+	for_each_possible_cpu(i) {
+		per_cpu(aa_local_buffers, i).contention = 0;
+		per_cpu(aa_local_buffers, i).hold = 0;
+		INIT_LIST_HEAD(&per_cpu(aa_local_buffers, i).head);
+	}
  	/*
  	 * A function may require two buffers at once. Usually the buffers are
  	 * used for a short period of time and are shared. On UP kernel buffers
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH v3] apparmor: global buffers spin lock may get contended
  2023-02-17  0:08       ` [PATCH v3] " John Johansen
@ 2023-02-17 10:44         ` Sebastian Andrzej Siewior
  2023-02-20  8:42           ` John Johansen
  0 siblings, 1 reply; 21+ messages in thread
From: Sebastian Andrzej Siewior @ 2023-02-17 10:44 UTC (permalink / raw)
  To: John Johansen
  Cc: LKLM, Sergey Senozhatsky, Peter Zijlstra, Tomasz Figa,
	linux-security-module, Anil Altinay

On 2023-02-16 16:08:10 [-0800], John Johansen wrote:
> --- a/security/apparmor/lsm.c
> +++ b/security/apparmor/lsm.c
> @@ -49,12 +49,19 @@ union aa_buffer {
>  	char buffer[1];
>  };
> +struct aa_local_cache {
> +	unsigned int contention;
> +	unsigned int hold;
> +	struct list_head head;
> +};

if you stick a local_lock_t into that struct, then you could replace
	cache = get_cpu_ptr(&aa_local_buffers);
with
	local_lock(&aa_local_buffers.lock);
	cache = this_cpu_ptr(&aa_local_buffers);

You would get the preempt_disable() based locking for the per-CPU
variable (as with get_cpu_ptr()) and additionally some lockdep
validation which would warn if it is used outside of task context (IRQ).
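
Roughly, the conversion would look like this (untested sketch; the helper
names are made up and the INIT_LIST_HEAD() loop in alloc_buffers() stays
as it is):

#include <linux/local_lock.h>

struct aa_local_cache {
	local_lock_t lock;
	unsigned int contention;
	unsigned int hold;
	struct list_head head;
};

static DEFINE_PER_CPU(struct aa_local_cache, aa_local_buffers) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

/* replaces the get_cpu_ptr()/put_cpu_ptr() pairs */
static struct aa_local_cache *lock_local_cache(void)
{
	local_lock(&aa_local_buffers.lock);
	return this_cpu_ptr(&aa_local_buffers);
}

static void unlock_local_cache(void)
{
	local_unlock(&aa_local_buffers.lock);
}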

I didn't completely parse the hold/contention logic, but it seems to
work ;)
You check "cache->count >= 2" twice but I don't see an inc/dec of it,
nor is it part of aa_local_cache.

I can't parse how many items can end up on the local list if the global
list is locked. My guess would be more than 2 due to the ->hold parameter.

Do you have any numbers on the machine and performance it improved? It
sure will be a good selling point.

Sebastian

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3] apparmor: global buffers spin lock may get contended
  2023-02-17 10:44         ` Sebastian Andrzej Siewior
@ 2023-02-20  8:42           ` John Johansen
  2023-02-21 21:27             ` Anil Altinay
  0 siblings, 1 reply; 21+ messages in thread
From: John Johansen @ 2023-02-20  8:42 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: LKLM, Sergey Senozhatsky, Peter Zijlstra, Tomasz Figa,
	linux-security-module, Anil Altinay

On 2/17/23 02:44, Sebastian Andrzej Siewior wrote:
> On 2023-02-16 16:08:10 [-0800], John Johansen wrote:
>> --- a/security/apparmor/lsm.c
>> +++ b/security/apparmor/lsm.c
>> @@ -49,12 +49,19 @@ union aa_buffer {
>>   	char buffer[1];
>>   };
>> +struct aa_local_cache {
>> +	unsigned int contention;
>> +	unsigned int hold;
>> +	struct list_head head;
>> +};
> 
> if you stick a local_lock_t into that struct, then you could replace
> 	cache = get_cpu_ptr(&aa_local_buffers);
> with
> 	local_lock(&aa_local_buffers.lock);
> 	cache = this_cpu_ptr(&aa_local_buffers);
> 
> You would get the preempt_disable() based locking for the per-CPU
> variable (as with get_cpu_ptr()) and additionally some lockdep
> validation which would warn if it is used outside of task context (IRQ).
> 
I did look at local_locks and there was a reason I didn't use them. I
can't recall what it was, as the original iteration of this is over a year
old now. I will have to dig into it again.

> I didn't parse completely the hold/contention logic but it seems to work
> ;)
> You check "cache->count >=  2" twice but I don't see an inc/ dec of it
> nor is it part of aa_local_cache.
> 
Sadly, I messed up the reordering of this and the debug patch. This will be
fixed in v4.

> I can't parse how many items can end up on the local list if the global
> list is locked. My guess would be more than 2 due the ->hold parameter.
> 
So this iteration forces pushing back to the global list if there are
already two on the local list. The hold parameter just affects how long
the buffers remain on the local list before trying to place them back on
the global list.

Originally, before the count was added, more than 2 buffers could end up
on the local list, and having too many local buffers is a waste of
memory. The count got added to address this. The value of 2 (which should
be switched to a define) was chosen because no mediation routine currently
uses more than 2 buffers.

Note that this doesn't mean that more than two buffers can be allocated
to a task on a cpu. It's possible in some cases for a task to have
allocated buffers and to still have buffers on the local cache list.
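
i.e. something along these lines (sketch only; MAX_LOCAL_COUNT is just a
placeholder name for the define):

/* no mediation routine currently uses more than 2 buffers at a time */
#define MAX_LOCAL_COUNT 2

struct aa_local_cache {
	unsigned int contention;
	unsigned int hold;
	unsigned int count;	/* buffers currently sitting on ->head */
	struct list_head head;
};

with aa_put_buffer() forcing the buffer back to the global list once count
reaches MAX_LOCAL_COUNT, even if hold hasn't run out yet.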

> Do you have any numbers on the machine and performance it improved? It
> sure will be a good selling point.
> 

I can include some supporting info, for a 16 core machine. But it will
take some time for me to get access to a bigger machine, where this
is much more important. Hence the call for some of the other people
on this thread to test.

thanks for the feedback


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3] apparmor: global buffers spin lock may get contended
  2023-02-20  8:42           ` John Johansen
@ 2023-02-21 21:27             ` Anil Altinay
  2023-06-26 23:35               ` Anil Altinay
       [not found]               ` <CACCxZWO-+M-J_enENr7q1WDcu1U8vYFoytqJxAh=x-nuP268zA@mail.gmail.com>
  0 siblings, 2 replies; 21+ messages in thread
From: Anil Altinay @ 2023-02-21 21:27 UTC (permalink / raw)
  To: John Johansen
  Cc: Sebastian Andrzej Siewior, LKLM, Sergey Senozhatsky,
	Peter Zijlstra, Tomasz Figa, linux-security-module

I can test the patch with 5.10 and 5.15 kernels on different machines.
Just let me know which machine types you would like me to test.

On Mon, Feb 20, 2023 at 12:42 AM John Johansen
<john.johansen@canonical.com> wrote:
>
> On 2/17/23 02:44, Sebastian Andrzej Siewior wrote:
> > On 2023-02-16 16:08:10 [-0800], John Johansen wrote:
> >> --- a/security/apparmor/lsm.c
> >> +++ b/security/apparmor/lsm.c
> >> @@ -49,12 +49,19 @@ union aa_buffer {
> >>      char buffer[1];
> >>   };
> >> +struct aa_local_cache {
> >> +    unsigned int contention;
> >> +    unsigned int hold;
> >> +    struct list_head head;
> >> +};
> >
> > if you stick a local_lock_t into that struct, then you could replace
> >       cache = get_cpu_ptr(&aa_local_buffers);
> > with
> >       local_lock(&aa_local_buffers.lock);
> >       cache = this_cpu_ptr(&aa_local_buffers);
> >
> > You would get the preempt_disable() based locking for the per-CPU
> > variable (as with get_cpu_ptr()) and additionally some lockdep
> > validation which would warn if it is used outside of task context (IRQ).
> >
> I did look at local_locks and there was a reason I didn't use them. I
> can't recall as the original iteration of this is over a year old now.
> I will have to dig into it again.
>
> > I didn't parse completely the hold/contention logic but it seems to work
> > ;)
> > You check "cache->count >=  2" twice but I don't see an inc/ dec of it
> > nor is it part of aa_local_cache.
> >
> sadly I messed up the reordering of this and the debug patch. This will be
> fixed in v4.
>
> > I can't parse how many items can end up on the local list if the global
> > list is locked. My guess would be more than 2 due the ->hold parameter.
> >
> So this iteration, forces pushing back to global list if there are already
> two on the local list. The hold parameter just affects how long the
> buffers remain on the local list, before trying to place them back on
> the global list.
>
> Originally before the count was added more than 2 buffers could end up
> on the local list, and having too many local buffers is a waste of
> memory. The count got added to address this. The value of 2 (which should
> be switched to a define) was chosen because no mediation routine currently
> uses more than 2 buffers.
>
> Note that this doesn't mean that more than two buffers can be allocated
> to a tasks on a cpu. Its possible in some cases to have a task have
> allocated buffers and to still have buffers on the local cache list.
>
> > Do you have any numbers on the machine and performance it improved? It
> > sure will be a good selling point.
> >
>
> I can include some supporting info, for a 16 core machine. But it will
> take some time to for me to get access to a bigger machine, where this
> is much more important. Hence the call for some of the other people
> on this thread to test.
>
> thanks for the feedback
>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3] apparmor: global buffers spin lock may get contended
  2023-02-21 21:27             ` Anil Altinay
@ 2023-06-26 23:35               ` Anil Altinay
       [not found]               ` <CACCxZWO-+M-J_enENr7q1WDcu1U8vYFoytqJxAh=x-nuP268zA@mail.gmail.com>
  1 sibling, 0 replies; 21+ messages in thread
From: Anil Altinay @ 2023-06-26 23:35 UTC (permalink / raw)
  To: John Johansen
  Cc: Sebastian Andrzej Siewior, LKLM, Sergey Senozhatsky,
	Peter Zijlstra, Tomasz Figa, linux-security-module

Hi John,

I was wondering if you got a chance to work on patch v4. Please let me
know if you need help with testing.

Best,
Anil


On Tue, Feb 21, 2023 at 1:27 PM Anil Altinay <aaltinay@google.com> wrote:
>
> I can test the patch with 5.10 and 5.15 kernels in different machines.
> Just let me know which machine types you would like me to test.
>
> On Mon, Feb 20, 2023 at 12:42 AM John Johansen
> <john.johansen@canonical.com> wrote:
> >
> > On 2/17/23 02:44, Sebastian Andrzej Siewior wrote:
> > > On 2023-02-16 16:08:10 [-0800], John Johansen wrote:
> > >> --- a/security/apparmor/lsm.c
> > >> +++ b/security/apparmor/lsm.c
> > >> @@ -49,12 +49,19 @@ union aa_buffer {
> > >>      char buffer[1];
> > >>   };
> > >> +struct aa_local_cache {
> > >> +    unsigned int contention;
> > >> +    unsigned int hold;
> > >> +    struct list_head head;
> > >> +};
> > >
> > > if you stick a local_lock_t into that struct, then you could replace
> > >       cache = get_cpu_ptr(&aa_local_buffers);
> > > with
> > >       local_lock(&aa_local_buffers.lock);
> > >       cache = this_cpu_ptr(&aa_local_buffers);
> > >
> > > You would get the preempt_disable() based locking for the per-CPU
> > > variable (as with get_cpu_ptr()) and additionally some lockdep
> > > validation which would warn if it is used outside of task context (IRQ).
> > >
> > I did look at local_locks and there was a reason I didn't use them. I
> > can't recall as the original iteration of this is over a year old now.
> > I will have to dig into it again.
> >
> > > I didn't parse completely the hold/contention logic but it seems to work
> > > ;)
> > > You check "cache->count >=  2" twice but I don't see an inc/ dec of it
> > > nor is it part of aa_local_cache.
> > >
> > sadly I messed up the reordering of this and the debug patch. This will be
> > fixed in v4.
> >
> > > I can't parse how many items can end up on the local list if the global
> > > list is locked. My guess would be more than 2 due the ->hold parameter.
> > >
> > So this iteration, forces pushing back to global list if there are already
> > two on the local list. The hold parameter just affects how long the
> > buffers remain on the local list, before trying to place them back on
> > the global list.
> >
> > Originally before the count was added more than 2 buffers could end up
> > on the local list, and having too many local buffers is a waste of
> > memory. The count got added to address this. The value of 2 (which should
> > be switched to a define) was chosen because no mediation routine currently
> > uses more than 2 buffers.
> >
> > Note that this doesn't mean that more than two buffers can be allocated
> > to a tasks on a cpu. Its possible in some cases to have a task have
> > allocated buffers and to still have buffers on the local cache list.
> >
> > > Do you have any numbers on the machine and performance it improved? It
> > > sure will be a good selling point.
> > >
> >
> > I can include some supporting info, for a 16 core machine. But it will
> > take some time to for me to get access to a bigger machine, where this
> > is much more important. Hence the call for some of the other people
> > on this thread to test.
> >
> > thanks for the feedback
> >

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3] apparmor: global buffers spin lock may get contended
       [not found]               ` <CACCxZWO-+M-J_enENr7q1WDcu1U8vYFoytqJxAh=x-nuP268zA@mail.gmail.com>
@ 2023-06-27  0:31                 ` John Johansen
  2023-10-06  4:18                   ` Sergey Senozhatsky
  0 siblings, 1 reply; 21+ messages in thread
From: John Johansen @ 2023-06-27  0:31 UTC (permalink / raw)
  To: Anil Altinay
  Cc: Sebastian Andrzej Siewior, LKLM, Sergey Senozhatsky,
	Peter Zijlstra, Tomasz Figa, linux-security-module

On 6/26/23 16:33, Anil Altinay wrote:
> Hi John,
> 
> I was wondering if you get a chance to work on patch v4. Please let me know if you need help with testing.
> 

yeah, testing help is always much appreciated. I have a v4, and I am working on 3 alternate versions to compare against, to help give a better sense of whether we can get away with simplifying or tweaking the scaling. I should be able to post them out some time tonight.

> Best,
> Anil
> 
> On Tue, Feb 21, 2023 at 1:27 PM Anil Altinay <aaltinay@google.com <mailto:aaltinay@google.com>> wrote:
> 
>     I can test the patch with 5.10 and 5.15 kernels in different machines.
>     Just let me know which machine types you would like me to test.
> 
>     On Mon, Feb 20, 2023 at 12:42 AM John Johansen
>     <john.johansen@canonical.com <mailto:john.johansen@canonical.com>> wrote:
>      >
>      > On 2/17/23 02:44, Sebastian Andrzej Siewior wrote:
>      > > On 2023-02-16 16:08:10 [-0800], John Johansen wrote:
>      > >> --- a/security/apparmor/lsm.c
>      > >> +++ b/security/apparmor/lsm.c
>      > >> @@ -49,12 +49,19 @@ union aa_buffer {
>      > >>      char buffer[1];
>      > >>   };
>      > >> +struct aa_local_cache {
>      > >> +    unsigned int contention;
>      > >> +    unsigned int hold;
>      > >> +    struct list_head head;
>      > >> +};
>      > >
>      > > if you stick a local_lock_t into that struct, then you could replace
>      > >       cache = get_cpu_ptr(&aa_local_buffers);
>      > > with
>      > >       local_lock(&aa_local_buffers.lock);
>      > >       cache = this_cpu_ptr(&aa_local_buffers);
>      > >
>      > > You would get the preempt_disable() based locking for the per-CPU
>      > > variable (as with get_cpu_ptr()) and additionally some lockdep
>      > > validation which would warn if it is used outside of task context (IRQ).
>      > >
>      > I did look at local_locks and there was a reason I didn't use them. I
>      > can't recall as the original iteration of this is over a year old now.
>      > I will have to dig into it again.
>      >
>      > > I didn't parse completely the hold/contention logic but it seems to work
>      > > ;)
>      > > You check "cache->count >=  2" twice but I don't see an inc/ dec of it
>      > > nor is it part of aa_local_cache.
>      > >
>      > sadly I messed up the reordering of this and the debug patch. This will be
>      > fixed in v4.
>      >
>      > > I can't parse how many items can end up on the local list if the global
>      > > list is locked. My guess would be more than 2 due the ->hold parameter.
>      > >
>      > So this iteration, forces pushing back to global list if there are already
>      > two on the local list. The hold parameter just affects how long the
>      > buffers remain on the local list, before trying to place them back on
>      > the global list.
>      >
>      > Originally before the count was added more than 2 buffers could end up
>      > on the local list, and having too many local buffers is a waste of
>      > memory. The count got added to address this. The value of 2 (which should
>      > be switched to a define) was chosen because no mediation routine currently
>      > uses more than 2 buffers.
>      >
>      > Note that this doesn't mean that more than two buffers can be allocated
>      > to a tasks on a cpu. Its possible in some cases to have a task have
>      > allocated buffers and to still have buffers on the local cache list.
>      >
>      > > Do you have any numbers on the machine and performance it improved? It
>      > > sure will be a good selling point.
>      > >
>      >
>      > I can include some supporting info, for a 16 core machine. But it will
>      > take some time to for me to get access to a bigger machine, where this
>      > is much more important. Hence the call for some of the other people
>      > on this thread to test.
>      >
>      > thanks for the feedback
>      >
> 


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3] apparmor: global buffers spin lock may get contended
  2023-06-27  0:31                 ` John Johansen
@ 2023-10-06  4:18                   ` Sergey Senozhatsky
  2023-10-17  9:21                     ` [PATCH v5 0/4] apparmor: cache buffers on percpu list if there is lock, contention John Johansen
  0 siblings, 1 reply; 21+ messages in thread
From: Sergey Senozhatsky @ 2023-10-06  4:18 UTC (permalink / raw)
  To: John Johansen
  Cc: Anil Altinay, Sebastian Andrzej Siewior, LKLM,
	Sergey Senozhatsky, Peter Zijlstra, Tomasz Figa,
	linux-security-module

On (23/06/26 17:31), John Johansen wrote:
> On 6/26/23 16:33, Anil Altinay wrote:
> > Hi John,
> > 
> > I was wondering if you get a chance to work on patch v4. Please let me know if you need help with testing.
> > 
> 
> yeah, testing help is always much appreciated. I have a v4, and I am
> working on 3 alternate version to compare against, to help give a better
> sense if we can get away with simplifying or tweak the scaling.
>
> I should be able to post them out some time tonight.

Hi John,

Did you get a chance to post v4? I may be able to give it some testing
on our real-life case.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v5 0/4] apparmor: cache buffers on percpu list if there is lock, contention
  2023-10-06  4:18                   ` Sergey Senozhatsky
@ 2023-10-17  9:21                     ` John Johansen
  2023-10-17  9:23                       ` [PATCH v5 1/4] " John Johansen
                                         ` (4 more replies)
  0 siblings, 5 replies; 21+ messages in thread
From: John Johansen @ 2023-10-17  9:21 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Anil Altinay, Sebastian Andrzej Siewior, LKLM, Peter Zijlstra,
	Tomasz Figa, linux-security-module, John Johansen

On 10/5/23 21:18, Sergey Senozhatsky wrote:
> On (23/06/26 17:31), John Johansen wrote:
>> On 6/26/23 16:33, Anil Altinay wrote:
>>> Hi John,
>>>
>>> I was wondering if you get a chance to work on patch v4. Please let me know if you need help with testing.
>>>
>>
>> yeah, testing help is always much appreciated. I have a v4, and I am
>> working on 3 alternate version to compare against, to help give a better
>> sense if we can get away with simplifying or tweak the scaling.
>>
>> I should be able to post them out some time tonight.
> 
> Hi John,
> 
> Did you get a chance to post v4? I may be able to give it some testing
> on our real-life case.

Sorry, yes. How about a v5? It is simplified, with 3 follow-on patches
that aren't strictly necessary; some combination of them might be
better than just the base patch, but splitting them out makes the
individual changes easier to review.

---


df323337e507 ("apparmor: Use a memory pool instead per-CPU caches")
changed buffer allocation to use a memory pool; however, on a heavily
loaded machine there can be lock contention on the global buffers
lock. Add a percpu list to cache buffers on when lock contention is
encountered.

When allocating buffers attempt to use cached buffers first,
before taking the global buffers lock. When freeing buffers
try to put them back to the global list but if contention is
encountered, put the buffer on the percpu list.

The length of time a buffer is held on the percpu list is dynamically
adjusted based on lock contention.
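
For illustration, the backoff added in patch 2/4 behaves roughly like this
(not part of the patches themselves):

/*
 * trylock on the global lock fails:   contention++ (capped at 9)
 *                                     hold += 1 << contention  (2, 4, ... 512)
 * uncontended lock acquisition:       contention--
 * buffer taken from the percpu list:  hold--
 */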

v5:
- simplify the base patch by removing (these improvements can be added later):
   - MAX_LOCAL and must lock
   - contention scaling.
v4:
- fix percpu ->count buffer count which had been spliced across a
   debug patch.
- introduce define for MAX_LOCAL_COUNT
- rework count check and locking around it.
- update commit message to reference commit that introduced the
   memory pool.
v3:
- limit number of buffers that can be pushed onto the percpu
   list. This avoids a problem on some kernels where one percpu
   list can inherit buffers from another cpu after a reschedule,
   causing more kernel memory to be used than is necessary. Under
   normal conditions this should eventually return to normal,
   but under pathological conditions the extra memory consumption
   may have been unbounded.
v2:
- dynamically adjust buffer hold time on percpu list based on
   lock contention.
v1:
- cache buffers on percpu list on lock contention

Reported-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Signed-off-by: John Johansen <john.johansen@canonical.com>


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v5 1/4] apparmor: cache buffers on percpu list if there is lock, contention
  2023-10-17  9:21                     ` [PATCH v5 0/4] apparmor: cache buffers on percpu list if there is lock, contention John Johansen
@ 2023-10-17  9:23                       ` John Johansen
  2023-10-17  9:24                       ` [PATCH v5 2/4] apparmor: exponential backoff on cache buffer contention John Johansen
                                         ` (3 subsequent siblings)
  4 siblings, 0 replies; 21+ messages in thread
From: John Johansen @ 2023-10-17  9:23 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Anil Altinay, Sebastian Andrzej Siewior, LKLM, Peter Zijlstra,
	Tomasz Figa, linux-security-module, John Johansen

df323337e507 ("apparmor: Use a memory pool instead per-CPU caches")
changed buffer allocation to use a memory pool; however, on a heavily
loaded machine there can be lock contention on the global buffers
lock. Add a percpu list to cache buffers on when lock contention is
encountered.

When allocating buffers attempt to use cached buffers first,
before taking the global buffers lock. When freeing buffers
try to put them back to the global list but if contention is
encountered, put the buffer on the percpu list.

The length of time a buffer is held on the percpu list is dynamically
adjusted based on lock contention.  The amount of hold time is
increased and decreased linearly.

Reported-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Signed-off-by: John Johansen <john.johansen@canonical.com>
---
  security/apparmor/lsm.c | 67 ++++++++++++++++++++++++++++++++++++++---
  1 file changed, 62 insertions(+), 5 deletions(-)

diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
index c80c1bd3024a..ce4f3e7a784d 100644
--- a/security/apparmor/lsm.c
+++ b/security/apparmor/lsm.c
@@ -49,12 +49,19 @@ union aa_buffer {
  	DECLARE_FLEX_ARRAY(char, buffer);
  };
  
+struct aa_local_cache {
+	unsigned int hold;
+	unsigned int count;
+	struct list_head head;
+};
+
  #define RESERVE_COUNT 2
  static int reserve_count = RESERVE_COUNT;
  static int buffer_count;
  
  static LIST_HEAD(aa_global_buffers);
  static DEFINE_SPINLOCK(aa_buffers_lock);
+static DEFINE_PER_CPU(struct aa_local_cache, aa_local_buffers);
  
  /*
   * LSM hook functions
@@ -1789,11 +1796,32 @@ static int param_set_mode(const char *val, const struct kernel_param *kp)
  char *aa_get_buffer(bool in_atomic)
  {
  	union aa_buffer *aa_buf;
+	struct aa_local_cache *cache;
  	bool try_again = true;
  	gfp_t flags = (GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
  
+	/* use per cpu cached buffers first */
+	cache = get_cpu_ptr(&aa_local_buffers);
+	if (!list_empty(&cache->head)) {
+		aa_buf = list_first_entry(&cache->head, union aa_buffer, list);
+		list_del(&aa_buf->list);
+		cache->hold--;
+		cache->count--;
+		put_cpu_ptr(&aa_local_buffers);
+		return &aa_buf->buffer[0];
+	}
+	put_cpu_ptr(&aa_local_buffers);
+
+	if (!spin_trylock(&aa_buffers_lock)) {
+		cache = get_cpu_ptr(&aa_local_buffers);
+		cache->hold += 1;
+		put_cpu_ptr(&aa_local_buffers);
+		spin_lock(&aa_buffers_lock);
+	} else {
+		cache = get_cpu_ptr(&aa_local_buffers);
+		put_cpu_ptr(&aa_local_buffers);
+	}
  retry:
-	spin_lock(&aa_buffers_lock);
  	if (buffer_count > reserve_count ||
  	    (in_atomic && !list_empty(&aa_global_buffers))) {
  		aa_buf = list_first_entry(&aa_global_buffers, union aa_buffer,
@@ -1819,6 +1847,7 @@ char *aa_get_buffer(bool in_atomic)
  	if (!aa_buf) {
  		if (try_again) {
  			try_again = false;
+			spin_lock(&aa_buffers_lock);
  			goto retry;
  		}
  		pr_warn_once("AppArmor: Failed to allocate a memory buffer.\n");
@@ -1830,15 +1859,34 @@ char *aa_get_buffer(bool in_atomic)
  void aa_put_buffer(char *buf)
  {
  	union aa_buffer *aa_buf;
+	struct aa_local_cache *cache;
  
  	if (!buf)
  		return;
  	aa_buf = container_of(buf, union aa_buffer, buffer[0]);
  
-	spin_lock(&aa_buffers_lock);
-	list_add(&aa_buf->list, &aa_global_buffers);
-	buffer_count++;
-	spin_unlock(&aa_buffers_lock);
+	cache = get_cpu_ptr(&aa_local_buffers);
+	if (!cache->hold) {
+		put_cpu_ptr(&aa_local_buffers);
+
+		if (spin_trylock(&aa_buffers_lock)) {
+			/* put back on global list */
+			list_add(&aa_buf->list, &aa_global_buffers);
+			buffer_count++;
+			spin_unlock(&aa_buffers_lock);
+			cache = get_cpu_ptr(&aa_local_buffers);
+			put_cpu_ptr(&aa_local_buffers);
+			return;
+		}
+		/* contention on global list, fallback to percpu */
+		cache = get_cpu_ptr(&aa_local_buffers);
+		cache->hold += 1;
+	}
+
+	/* cache in percpu list */
+	list_add(&aa_buf->list, &cache->head);
+	cache->count++;
+	put_cpu_ptr(&aa_local_buffers);
  }
  
  /*
@@ -1880,6 +1928,15 @@ static int __init alloc_buffers(void)
  	union aa_buffer *aa_buf;
  	int i, num;
  
+	/*
+	 * per cpu set of cached allocated buffers used to help reduce
+	 * lock contention
+	 */
+	for_each_possible_cpu(i) {
+		per_cpu(aa_local_buffers, i).hold = 0;
+		per_cpu(aa_local_buffers, i).count = 0;
+		INIT_LIST_HEAD(&per_cpu(aa_local_buffers, i).head);
+	}
  	/*
  	 * A function may require two buffers at once. Usually the buffers are
  	 * used for a short period of time and are shared. On UP kernel buffers
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH v5 2/4] apparmor: exponential backoff on cache buffer contention
  2023-10-17  9:21                     ` [PATCH v5 0/4] apparmor: cache buffers on percpu list if there is lock, contention John Johansen
  2023-10-17  9:23                       ` [PATCH v5 1/4] " John Johansen
@ 2023-10-17  9:24                       ` John Johansen
  2023-10-17  9:25                       ` [PATCH v5 3/4] apparmor: experiment with faster backoff on global buffer John Johansen
                                         ` (2 subsequent siblings)
  4 siblings, 0 replies; 21+ messages in thread
From: John Johansen @ 2023-10-17  9:24 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Anil Altinay, Sebastian Andrzej Siewior, LKLM, Peter Zijlstra,
	Tomasz Figa, linux-security-module, John Johansen

Reduce contention on the global buffers lock by using an exponential
back off strategy where the hold count is doubled when contention is
encountered and backed off linearly when there isn't contention.
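
To make the growth concrete, below is a small user-space model of the
arithmetic (a stripped-down copy of the relevant fields and of
update_contention() from the patch; it is only an illustration, not
kernel code):

#include <stdio.h>

struct aa_local_cache {
	unsigned int contention;
	unsigned int hold;
};

/* same arithmetic as update_contention() in the patch below */
static void update_contention(struct aa_local_cache *cache)
{
	cache->contention += 1;
	if (cache->contention > 9)
		cache->contention = 9;
	cache->hold += 1u << cache->contention;	/* increments of 2, 4, 8, ... */
}

int main(void)
{
	struct aa_local_cache cache = { 0, 0 };
	int i;

	for (i = 0; i < 10; i++) {
		update_contention(&cache);
		printf("contended acquire %2d: hold credit = %u\n",
		       i + 1, cache.hold);
	}
	return 0;
}

The per-event increment doubles until it caps at 1 << 9 = 512, while the
cache->contention-- hunks below walk the counter back one step at a time
whenever the lock is taken without contention, giving the linear back off.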

Signed-off-by: John Johansen <john.johansen@canonical.com>
---
  security/apparmor/lsm.c | 18 ++++++++++++++++--
  1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
index ce4f3e7a784d..fd6779ff0da4 100644
--- a/security/apparmor/lsm.c
+++ b/security/apparmor/lsm.c
@@ -50,6 +50,7 @@ union aa_buffer {
  };
  
  struct aa_local_cache {
+	unsigned int contention;
  	unsigned int hold;
  	unsigned int count;
  	struct list_head head;
@@ -1793,6 +1794,14 @@ static int param_set_mode(const char *val, const struct kernel_param *kp)
  	return 0;
  }
  
+static void update_contention(struct aa_local_cache *cache)
+{
+	cache->contention += 1;
+	if (cache->contention > 9)
+		cache->contention = 9;
+	cache->hold += 1 << cache->contention;		/* 2, 4, 8, ... */
+}
+
  char *aa_get_buffer(bool in_atomic)
  {
  	union aa_buffer *aa_buf;
@@ -1814,11 +1823,13 @@ char *aa_get_buffer(bool in_atomic)
  
  	if (!spin_trylock(&aa_buffers_lock)) {
  		cache = get_cpu_ptr(&aa_local_buffers);
-		cache->hold += 1;
+		update_contention(cache);
  		put_cpu_ptr(&aa_local_buffers);
  		spin_lock(&aa_buffers_lock);
  	} else {
  		cache = get_cpu_ptr(&aa_local_buffers);
+		if (cache->contention)
+			cache->contention--;
  		put_cpu_ptr(&aa_local_buffers);
  	}
  retry:
@@ -1875,12 +1886,14 @@ void aa_put_buffer(char *buf)
  			buffer_count++;
  			spin_unlock(&aa_buffers_lock);
  			cache = get_cpu_ptr(&aa_local_buffers);
+			if (cache->contention)
+				cache->contention--;
  			put_cpu_ptr(&aa_local_buffers);
  			return;
  		}
  		/* contention on global list, fallback to percpu */
  		cache = get_cpu_ptr(&aa_local_buffers);
-		cache->hold += 1;
+		update_contention(cache);
  	}
  
  	/* cache in percpu list */
@@ -1933,6 +1946,7 @@ static int __init alloc_buffers(void)
  	 * lock contention
  	 */
  	for_each_possible_cpu(i) {
+		per_cpu(aa_local_buffers, i).contention = 0;
  		per_cpu(aa_local_buffers, i).hold = 0;
  		per_cpu(aa_local_buffers, i).count = 0;
  		INIT_LIST_HEAD(&per_cpu(aa_local_buffers, i).head);
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH v5 3/4] apparmor: experiment with faster backoff on global buffer
  2023-10-17  9:21                     ` [PATCH v5 0/4] apparmor: cache buffers on percpu list if there is lock, contention John Johansen
  2023-10-17  9:23                       ` [PATCH v5 1/4] " John Johansen
  2023-10-17  9:24                       ` [PATCH v5 2/4] apparmor: exponential backoff on cache buffer contention John Johansen
@ 2023-10-17  9:25                       ` John Johansen
  2023-10-17  9:26                       ` [PATCH v5 4/4] apparmor: limit the number of buffers in percpu cache John Johansen
  2023-10-26  5:13                       ` [PATCH v5 0/4] apparmor: cache buffers on percpu list if there is lock, contention Sergey Senozhatsky
  4 siblings, 0 replies; 21+ messages in thread
From: John Johansen @ 2023-10-17  9:25 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Anil Altinay, Sebastian Andrzej Siewior, LKLM, Peter Zijlstra,
	Tomasz Figa, linux-security-module, John Johansen

Instead of doubling the hold count when contention is encountered,
increase it by 8x. This makes for a faster back off, but results in
buffers being held longer.
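
For comparison with the previous patch: with a step of 3 the successive
hold increments are 1 << 3 = 8, 1 << 6 = 64 and then 1 << 9 = 512 for
every contended acquisition after that, so the cap is reached on the
third contended acquisition instead of the ninth.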

Signed-off-by: John Johansen <john.johansen@canonical.com>
---
  security/apparmor/lsm.c | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
index fd6779ff0da4..52423d88854a 100644
--- a/security/apparmor/lsm.c
+++ b/security/apparmor/lsm.c
@@ -1796,10 +1796,10 @@ static int param_set_mode(const char *val, const struct kernel_param *kp)
  
  static void update_contention(struct aa_local_cache *cache)
  {
-	cache->contention += 1;
+	cache->contention += 3;
  	if (cache->contention > 9)
  		cache->contention = 9;
-	cache->hold += 1 << cache->contention;		/* 2, 4, 8, ... */
+	cache->hold += 1 << cache->contention;		/* 8, 64, 512, ... */
  }
  
  char *aa_get_buffer(bool in_atomic)
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH v5 4/4] apparmor: limit the number of buffers in percpu cache
  2023-10-17  9:21                     ` [PATCH v5 0/4] apparmor: cache buffers on percpu list if there is lock, contention John Johansen
                                         ` (2 preceding siblings ...)
  2023-10-17  9:25                       ` [PATCH v5 3/4] apparmor: experiment with faster backoff on global buffer John Johansen
@ 2023-10-17  9:26                       ` John Johansen
  2023-10-26  5:13                       ` [PATCH v5 0/4] apparmor: cache buffers on percpu list if there is lock, contention Sergey Senozhatsky
  4 siblings, 0 replies; 21+ messages in thread
From: John Johansen @ 2023-10-17  9:26 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Anil Altinay, Sebastian Andrzej Siewior, LKLM, Peter Zijlstra,
	Tomasz Figa, linux-security-module, John Johansen

Force buffers to be returned to the global pool, regardless of
contention, when the percpu cache is full. This ensures that the percpu
buffer list never grows longer than needed.
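
As a rough user-space restatement of the destination choice
aa_put_buffer() makes after this patch (the helper name below is
invented for the illustration; the locking, the goto structure and the
contention bookkeeping are left out, only the decision itself is
modeled):

#include <stdbool.h>
#include <stdio.h>

#define MAX_LOCAL_COUNT 2

struct aa_local_cache {
	unsigned int hold;	/* outstanding "keep buffers local" credit */
	unsigned int count;	/* buffers currently parked on this CPU */
};

/* true: return the buffer to the global pool; false: park it per-CPU */
static bool goes_to_global_pool(const struct aa_local_cache *cache,
				bool trylock_succeeded)
{
	if (cache->hold)
		return false;		/* recent contention: keep it local */
	if (cache->count >= MAX_LOCAL_COUNT)
		return true;		/* local list full: lock even if contended */
	return trylock_succeeded;	/* otherwise only when the lock was free */
}

int main(void)
{
	struct aa_local_cache full    = { .hold = 0, .count = MAX_LOCAL_COUNT };
	struct aa_local_cache notfull = { .hold = 0, .count = 1 };

	/* the global lock is contended (trylock failed) in both cases */
	printf("cache full     -> global pool? %d\n",
	       goes_to_global_pool(&full, false));
	printf("cache not full -> global pool? %d\n",
	       goes_to_global_pool(&notfull, false));
	return 0;
}

Note that the cache->count check only applies once the hold credit has
drained, mirroring the fact that the new must_lock test sits inside the
!cache->hold branch in the hunk below.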

Signed-off-by: John Johansen <john.johansen@canonical.com>
---
  security/apparmor/lsm.c | 9 ++++++++-
  1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
index 52423d88854a..e6765f64f6bf 100644
--- a/security/apparmor/lsm.c
+++ b/security/apparmor/lsm.c
@@ -56,6 +56,7 @@ struct aa_local_cache {
  	struct list_head head;
  };
  
+#define MAX_LOCAL_COUNT 2
  #define RESERVE_COUNT 2
  static int reserve_count = RESERVE_COUNT;
  static int buffer_count;
@@ -1878,9 +1879,15 @@ void aa_put_buffer(char *buf)
  
  	cache = get_cpu_ptr(&aa_local_buffers);
  	if (!cache->hold) {
+		bool must_lock = cache->count >= MAX_LOCAL_COUNT;
+
  		put_cpu_ptr(&aa_local_buffers);
  
-		if (spin_trylock(&aa_buffers_lock)) {
+		if (must_lock) {
+			spin_lock(&aa_buffers_lock);
+			goto locked;
+		} else if (spin_trylock(&aa_buffers_lock)) {
+		locked:
  			/* put back on global list */
  			list_add(&aa_buf->list, &aa_global_buffers);
  			buffer_count++;
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH v5 0/4] apparmor: cache buffers on percpu list if there is lock, contention
  2023-10-17  9:21                     ` [PATCH v5 0/4] apparmor: cache buffers on percpu list if there is lock, contention John Johansen
                                         ` (3 preceding siblings ...)
  2023-10-17  9:26                       ` [PATCH v5 4/4] apparmor: limit the number of buffers in percpu cache John Johansen
@ 2023-10-26  5:13                       ` Sergey Senozhatsky
  4 siblings, 0 replies; 21+ messages in thread
From: Sergey Senozhatsky @ 2023-10-26  5:13 UTC (permalink / raw)
  To: John Johansen
  Cc: Sergey Senozhatsky, Anil Altinay, Sebastian Andrzej Siewior,
	LKLM, Peter Zijlstra, Tomasz Figa, linux-security-module

On (23/10/17 02:21), John Johansen wrote:
> > > yeah, testing help is always much appreciated. I have a v4, and I am
> > > working on 3 alternate version to compare against, to help give a better
> > > sense if we can get away with simplifying or tweak the scaling.
> > > 
> > > I should be able to post them out some time tonight.
> > 
> > Hi John,
> > 
> > Did you get a chance to post v4? I may be able to give it some testing
> > on our real-life case.
> 
> sorry yes, how about a v5. That is simplified with 3 follow on patches
> that aren't strictly necessary, but some combination of them might be
> better than just the base patch, but splitting them out makes the
> individual changes easier to review.

Sorry for the late reply. So I gave it a try, but apparently our build
environment has changed quite significantly since the last time I
looked into it.

I don't see that many aa_get/put_buffer() calls anymore. The apparmor
buffer functions are mostly called from the exec path:

	security_bprm_creds_for_exec()
	 apparmor_bprm_creds_for_exec()
	  make_vfsuid()
	   aa_get_buffer()

As for vfs_statx()->...->apparmor_inode_getattr()->aa_path_perm(),
that path is bpf_lsm_inode_getsecid() now.

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2023-10-26  5:13 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-07-13 13:19 apparmor: global buffers spin lock may get contended Sergey Senozhatsky
2021-08-15  9:47 ` John Johansen
2022-10-28  9:34 ` John Johansen
2022-10-31  3:52   ` Sergey Senozhatsky
2022-10-31  3:55     ` John Johansen
2022-10-31  4:04       ` Sergey Senozhatsky
2023-02-17  0:03         ` John Johansen
2023-02-17  0:08       ` [PATCH v3] " John Johansen
2023-02-17 10:44         ` Sebastian Andrzej Siewior
2023-02-20  8:42           ` John Johansen
2023-02-21 21:27             ` Anil Altinay
2023-06-26 23:35               ` Anil Altinay
     [not found]               ` <CACCxZWO-+M-J_enENr7q1WDcu1U8vYFoytqJxAh=x-nuP268zA@mail.gmail.com>
2023-06-27  0:31                 ` John Johansen
2023-10-06  4:18                   ` Sergey Senozhatsky
2023-10-17  9:21                     ` [PATCH v5 0/4] apparmor: cache buffers on percpu list if there is lock, contention John Johansen
2023-10-17  9:23                       ` [PATCH v5 1/4] " John Johansen
2023-10-17  9:24                       ` [PATCH v5 2/4] apparmor: exponential backoff on cache buffer contention John Johansen
2023-10-17  9:25                       ` [PATCH v5 3/4] apparmor: experiment with faster backoff on global buffer John Johansen
2023-10-17  9:26                       ` [PATCH v5 4/4] apparmor: limit the number of buffers in percpu cache John Johansen
2023-10-26  5:13                       ` [PATCH v5 0/4] apparmor: cache buffers on percpu list if there is lock, contention Sergey Senozhatsky
     [not found] ` <20221030013028.3557-1-hdanton@sina.com>
2022-10-30  6:32   ` apparmor: global buffers spin lock may get contended John Johansen
