From: Reinette Chatre <reinette.chatre@intel.com>
To: Greg KH <gregkh@linuxfoundation.org>
Cc: <dave.hansen@linux.intel.com>, <jarkko@kernel.org>,
	<tglx@linutronix.de>, <bp@alien8.de>, <mingo@redhat.com>,
	<linux-sgx@vger.kernel.org>, <x86@kernel.org>,
	<seanjc@google.com>, <tony.luck@intel.com>, <hpa@zytor.com>,
	<linux-kernel@vger.kernel.org>, <stable@vger.kernel.org>
Subject: Re: [PATCH] x86/sgx: Fix free page accounting
Date: Mon, 8 Nov 2021 11:19:20 -0800
Message-ID: <1943c7d5-1840-9e27-ae03-96489ab8766a@intel.com>
In-Reply-To: <YYTY60T7YzycEAmp@kroah.com>

Hi Greg,

On 11/5/2021 12:10 AM, Greg KH wrote:
> On Thu, Nov 04, 2021 at 01:57:31PM -0700, Reinette Chatre wrote:
>> Hi Greg,
>>
>> On 11/4/2021 11:54 AM, Greg KH wrote:
>>> On Thu, Nov 04, 2021 at 11:28:54AM -0700, Reinette Chatre wrote:
>>>> The SGX driver maintains a single global free page counter,
>>>> sgx_nr_free_pages, that reflects the number of free pages available
>>>> across all NUMA nodes. Correspondingly, a list of free pages is
>>>> associated with each NUMA node and sgx_nr_free_pages is updated
>>>> every time a page is added or removed from any of the free page
>>>> lists. The main usage of sgx_nr_free_pages is by the reclaimer
>>>> that will run when the total free pages go below a watermark to
>>>> ensure that there are always some free pages available to, for
>>>> example, support efficient page faults.
>>>>
>>>> With sgx_nr_free_pages accessed and modified from a few places
>>>> it is essential to ensure that these accesses are done safely but
>>>> this is not the case. sgx_nr_free_pages is sometimes accessed
>>>> without any protection and when it is protected it is done
>>>> inconsistently with any one of the spin locks associated with the
>>>> individual NUMA nodes.
>>>>
>>>> The consequence of sgx_nr_free_pages not being protected is that
>>>> its value may not accurately reflect the actual number of free
>>>> pages on the system, impacting the availability of free pages in
>>>> support of many flows. The problematic scenario is when the
>>>> reclaimer never runs because it believes there to be sufficient
>>>> free pages while any attempt to allocate a page fails because there
>>>> are no free pages available. The worst scenario observed was a
>>>> user space hang because of repeated page faults caused by
>>>> no free pages ever made available.
>>>>
>>>> Change the global free page counter to an atomic type that
>>>> ensures simultaneous updates are done safely. While doing so, move
>>>> the updating of the variable outside of the spin lock critical
>>>> section to which it does not belong.
>>>>
>>>> Cc: stable@vger.kernel.org
>>>> Fixes: 901ddbb9ecf5 ("x86/sgx: Add a basic NUMA allocation scheme to sgx_alloc_epc_page()")
>>>> Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
>>>> Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
>>>> ---
>>>>    arch/x86/kernel/cpu/sgx/main.c | 12 ++++++------
>>>>    1 file changed, 6 insertions(+), 6 deletions(-)
>>>>
>>>> diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
>>>> index 63d3de02bbcc..8558d7d5f3e7 100644
>>>> --- a/arch/x86/kernel/cpu/sgx/main.c
>>>> +++ b/arch/x86/kernel/cpu/sgx/main.c
>>>> @@ -28,8 +28,7 @@ static DECLARE_WAIT_QUEUE_HEAD(ksgxd_waitq);
>>>>    static LIST_HEAD(sgx_active_page_list);
>>>>    static DEFINE_SPINLOCK(sgx_reclaimer_lock);
>>>> -/* The free page list lock protected variables prepend the lock. */
>>>> -static unsigned long sgx_nr_free_pages;
>>>> +atomic_long_t sgx_nr_free_pages = ATOMIC_LONG_INIT(0);
>>>>    /* Nodes with one or more EPC sections. */
>>>>    static nodemask_t sgx_numa_mask;
>>>> @@ -403,14 +402,15 @@ static void sgx_reclaim_pages(void)
>>>>    		spin_lock(&node->lock);
>>>>    		list_add_tail(&epc_page->list, &node->free_page_list);
>>>> -		sgx_nr_free_pages++;
>>>>    		spin_unlock(&node->lock);
>>>> +		atomic_long_inc(&sgx_nr_free_pages);
>>>>    	}
>>>>    }
>>>>    static bool sgx_should_reclaim(unsigned long watermark)
>>>>    {
>>>> -	return sgx_nr_free_pages < watermark && !list_empty(&sgx_active_page_list);
>>>> +	return atomic_long_read(&sgx_nr_free_pages) < watermark &&
>>>> +	       !list_empty(&sgx_active_page_list);

...

>>> The value changes were happening safely, it was just the reading of the
>>> value that was not.  You have not changed the fact that the value can
>>> change right after reading given that there was not going to be a
>>> problem with reading a stale value before.
>>
>> I am testing on a two socket system and I am seeing that the value of
>> sgx_nr_free_pages does not accurately reflect the actual free pages.
>>
>> It does not look to me like the value is updated safely as it is updated
>> with inconsistent protection on a system like this. There is a spin lock
>> associated with each NUMA node and which lock is used to update the variable
>> depends on which NUMA node memory is being modified - it is not always
>> protected with the same lock:
>>
>> spin_lock(&node->lock);
>> sgx_nr_free_pages++;
>> spin_unlock(&node->lock);
> 
> Ah, I missed that the original code was using a different lock for every
> call place, while now you are just using a single lock (the atomic value
> itself.)  That makes more sense, sorry for the noise.
> 
> But isn't this going to cause more thrashing and slow things down as you
> are hitting the "global" lock for this variable for every allocation?
> Or is this not in the hot path?

I do see this as being in the hot path since it is part of the page
fault handling flow. A global lock does seem to be required, though:
there is a single free page count that directs the reclaimer, and that
counter needs to be updated safely.
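
To make the race concrete, here is a minimal userspace sketch (C11
atomics plus pthreads, built with -pthread), not kernel code: two
threads stand in for two NUMA nodes, each taking only its own lock
while incrementing a shared plain counter, next to an atomic counter
updated without any lock. The thread count, iteration count, and names
are arbitrary and only for illustration.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NODES 2
#define ITERS 1000000

static pthread_mutex_t node_lock[NODES] = {
        PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
};
static long plain_count;        /* updated under per-node locks: racy */
static atomic_long safe_count;  /* updated atomically: accurate */

static void *updater(void *arg)
{
        int node = (int)(long)arg;
        int i;

        for (i = 0; i < ITERS; i++) {
                /*
                 * Each "node" takes only its own lock, like the
                 * per-NUMA-node spinlocks, so the two threads do not
                 * serialize against each other and increments of
                 * plain_count can be lost.
                 */
                pthread_mutex_lock(&node_lock[node]);
                plain_count++;
                pthread_mutex_unlock(&node_lock[node]);

                /* The atomic update needs no shared lock to stay exact. */
                atomic_fetch_add(&safe_count, 1);
        }
        return NULL;
}

int main(void)
{
        pthread_t t[NODES];
        long n;

        for (n = 0; n < NODES; n++)
                pthread_create(&t[n], NULL, updater, (void *)n);
        for (n = 0; n < NODES; n++)
                pthread_join(t[n], NULL);

        printf("plain=%ld atomic=%ld expected=%ld\n",
               plain_count, atomic_load(&safe_count),
               (long)NODES * ITERS);
        return 0;
}

Running it a few times typically shows the plain counter falling short
of NODES * ITERS while the atomic counter matches it exactly, which
mirrors sgx_nr_free_pages drifting away from the real number of free
pages.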

I obtained access to another two-socket system on which I can test
this issue. Since this system also has two NUMA nodes it updates the
global counter unsafely, but it does not encounter the user space
hang, so I can measure how much this fix slows things down.

Interestingly, without the fix the test is actually slightly _slower_
than with the fix. My speculation is that the issue is encountered on
this system as well, but goes unnoticed because the global counter can
correct itself after some time rather than getting stuck as it did on
the other system from which I sent the long traces.

Here are four runs without the fix:
real    0m47.024s 0m47.433s 0m47.547s 0m47.569s
user    0m7.204s  0m7.292s  0m7.265s  0m7.388s
sys     0m39.806s 0m40.129s 0m40.271s 0m40.168s

Here are four runs with the fix:
real    0m46.893s 0m47.328s 0m46.786s 0m46.635s
user    0m7.351s  0m7.252s  0m7.130s  0m7.170s
sys     0m39.528s 0m40.063s 0m39.642s 0m39.452s


>> Here is the perf top trace before this patch is applied showing how user
>> space is stuck hitting page faults over and over:
>>     PerfTop:    4569 irqs/sec  kernel:25.0%  exact: 100.0% lost: 0/0 drop:
>> 0/0 [4000Hz cycles],  (all, 224 CPUs)
> 
> <ascii art that line-wrapped snipped>

Sorry about that.

> 
>> With this patch the test is able to complete and the tracing shows that the
>> reclaimer is getting a chance to run. Previously the system was spending
>> almost all its time in page faults but now the time is split between the
>> page faults and the reclaimer.
>>
>>
>>     PerfTop:    7432 irqs/sec  kernel:81.5%  exact: 100.0% lost: 0/0 drop:
>> 0/0 [4000Hz cycles],  (all, 224 CPUs)
> 
> Ok, that's better, you need the reclaim in order to make forward
> progress.
> 
> Thanks for the detailed explanation, no objection from me, sorry for
> the misunderstanding.

Thank you very much for taking a look. It is much appreciated.

Reinette

Thread overview: 14+ messages
2021-11-04 18:28 [PATCH] x86/sgx: Fix free page accounting Reinette Chatre
2021-11-04 18:36 ` Luck, Tony
2021-11-04 18:44   ` Reinette Chatre
2021-11-04 18:54 ` Greg KH
2021-11-04 19:04   ` Dave Hansen
2021-11-04 20:57   ` Reinette Chatre
2021-11-05  7:10     ` Greg KH
2021-11-08 19:19       ` Reinette Chatre [this message]
2021-11-07 16:45 ` Jarkko Sakkinen
2021-11-07 16:47   ` Jarkko Sakkinen
2021-11-08 19:48     ` Reinette Chatre
2021-11-08 20:12       ` Jarkko Sakkinen
2021-11-08 20:56         ` Reinette Chatre
2021-11-09  1:30           ` Jarkko Sakkinen
