Date: Sat, 4 Apr 2020 02:33:31 +0300
From: Jarkko Sakkinen
To: linux-sgx@vger.kernel.org
Cc: Sean Christopherson, Haitao Huang
Subject: Re: [PATCH v2] x86/sgx: Fix deadlock and race conditions between fork() and EPC reclaim
Message-ID: <20200403233237.GA11802@linux.intel.com>
References: <20200403093550.104789-1-jarkko.sakkinen@linux.intel.com>
In-Reply-To: <20200403093550.104789-1-jarkko.sakkinen@linux.intel.com>

On Fri, Apr 03, 2020 at 12:35:50PM +0300, Jarkko Sakkinen wrote:
> From: Sean Christopherson
>
> Drop the synchronize_srcu() from sgx_encl_mm_add() and replace it with a
> mm_list versioning concept to avoid deadlock when adding an mm during
> dup_mmap()/fork(), and to ensure copied PTEs are zapped.
>
> When dup_mmap() runs, it holds mmap_sem for write in both the old mm and
> the new mm. Invoking synchronize_srcu() while holding mmap_sem of an mm
> that is already attached to the enclave will deadlock if the reclaimer
> is in the process of walking mm_list, as the reclaimer will try to
> acquire mmap_sem (of the old mm) while holding encl->srcu for read.
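In lock-ordering terms, the two hung tasks in the traces below form this
cycle (my own annotation, using the function names from the call chains;
not code from the patch):

	/* ksgxswapd: sgx_reclaim_pages() */
	idx = srcu_read_lock(&encl->srcu);	/* holds the SRCU read side */
	down_read(&encl_mm->mm->mmap_sem);	/* ...and waits on mmap_sem  */

	/* fork(): dup_mm() -> sgx_vma_open() -> sgx_encl_mm_add() */
	down_write(&mm->mmap_sem);		/* holds mmap_sem for write  */
	synchronize_srcu(&encl->srcu);		/* ...and waits on the SRCU  */
						/* readers: ABBA deadlock    */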
> INFO: task ksgxswapd:181 blocked for more than 120 seconds.
>  ksgxswapd       D    0   181      2 0x80004000
>  Call Trace:
>   __schedule+0x2db/0x700
>   schedule+0x44/0xb0
>   rwsem_down_read_slowpath+0x370/0x470
>   down_read+0x95/0xa0
>   sgx_reclaim_pages+0x1d2/0x7d0
>   ksgxswapd+0x151/0x2e0
>   kthread+0x120/0x140
>   ret_from_fork+0x35/0x40
>
> INFO: task fork_consistenc:18824 blocked for more than 120 seconds.
>  fork_consistenc D    0 18824  18786 0x00004320
>  Call Trace:
>   __schedule+0x2db/0x700
>   schedule+0x44/0xb0
>   schedule_timeout+0x205/0x300
>   wait_for_completion+0xb7/0x140
>   __synchronize_srcu.part.22+0x81/0xb0
>   synchronize_srcu_expedited+0x27/0x30
>   synchronize_srcu+0x57/0xe0
>   sgx_encl_mm_add+0x12b/0x160
>   sgx_vma_open+0x22/0x40
>   dup_mm+0x521/0x580
>   copy_process+0x1a56/0x1b50
>   _do_fork+0x85/0x3a0
>   __x64_sys_clone+0x8e/0xb0
>   do_syscall_64+0x57/0x1b0
>   entry_SYSCALL_64_after_hwframe+0x44/0xa9
>
> Furthermore, doing synchronize_srcu() in sgx_encl_mm_add() does not
> prevent the new mm from having stale PTEs pointing at the EPC page to be
> reclaimed. dup_mmap() calls vm_ops->open()/sgx_encl_mm_add() _after_
> PTEs are copied to the new mm, i.e. blocking fork() until reclaim zaps
> the old mm is pointless as the stale PTEs have already been created in
> the new mm.
>
> All other flows that walk mm_list can safely race with dup_mmap() or are
> protected by a different mechanism. Add comments to all SRCU readers
> that don't check the list version to document why it's OK for the flow
> to ignore the version.
>
> Note, synchronize_srcu() is still needed when removing an mm from an
> enclave, as the SRCU readers must complete their walk before the mm can
> be freed. Removing an mm is never done while holding mmap_sem.
>
> Cc: Haitao Huang
> Cc: Sean Christopherson
> Signed-off-by: Sean Christopherson
> Signed-off-by: Jarkko Sakkinen
> ---
> v2:
> * Remove smp_wmb() as x86 does not reorder writes in the pipeline.
> * Refine comments to be more to the point and more maintainable when
>   things might change.
> * Replace the ad hoc (goto-based) loop construct with a proper loop
>   construct.
>
>  arch/x86/kernel/cpu/sgx/encl.c    | 11 +++++++--
>  arch/x86/kernel/cpu/sgx/encl.h    |  1 +
>  arch/x86/kernel/cpu/sgx/reclaim.c | 39 +++++++++++++++++++++----------
>  3 files changed, 37 insertions(+), 14 deletions(-)
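Before the diff, here is the versioning handshake in isolation, since it
is easy to lose among the hunks (my own distilled sketch of the pattern,
not a literal excerpt from the patch):

	/* Writer (sgx_encl_mm_add()), serialized by encl->mm_lock:
	 * publish the new entry first, then bump the version.
	 */
	spin_lock(&encl->mm_lock);
	list_add_rcu(&encl_mm->list, &encl->mm_list);
	encl->mm_list_version++;
	spin_unlock(&encl->mm_lock);

	/* Reader (the reclaimer): sample the version, walk the list, and
	 * re-walk if the version moved, i.e. if a writer may have added an
	 * mm behind the walk.  The reader never sleeps waiting for an SRCU
	 * grace period, so it cannot deadlock with dup_mmap().
	 */
	do {
		version = encl->mm_list_version;
		smp_rmb();	/* read the version before the list */
		/* ... walk encl->mm_list under srcu_read_lock() ... */
	} while (encl->mm_list_version != version);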
> diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
> index e0124a2f22d5..5b15352b3d4f 100644
> --- a/arch/x86/kernel/cpu/sgx/encl.c
> +++ b/arch/x86/kernel/cpu/sgx/encl.c
> @@ -196,6 +196,9 @@ int sgx_encl_mm_add(struct sgx_encl *encl, struct mm_struct *mm)
>  	struct sgx_encl_mm *encl_mm;
>  	int ret;
>  
> +	/* mm_list can be accessed only by a single thread at a time. */
> +	lockdep_assert_held_write(&mm->mmap_sem);
> +
>  	if (atomic_read(&encl->flags) & SGX_ENCL_DEAD)
>  		return -EINVAL;
>  
> @@ -221,12 +224,16 @@ int sgx_encl_mm_add(struct sgx_encl *encl, struct mm_struct *mm)
>  		return ret;
>  	}
>  
> +	/*
> +	 * The page reclaimer uses the list version for synchronization
> +	 * instead of synchronize_srcu() because otherwise we could
> +	 * deadlock with dup_mmap().
> +	 */
>  	spin_lock(&encl->mm_lock);
>  	list_add_rcu(&encl_mm->list, &encl->mm_list);
> +	encl->mm_list_version++;
>  	spin_unlock(&encl->mm_lock);
>  
> -	synchronize_srcu(&encl->srcu);
> -
>  	return 0;
>  }
>  
> diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
> index 44b353aa8866..f0f72e591244 100644
> --- a/arch/x86/kernel/cpu/sgx/encl.h
> +++ b/arch/x86/kernel/cpu/sgx/encl.h
> @@ -74,6 +74,7 @@ struct sgx_encl {
>  	struct mutex lock;
>  	struct list_head mm_list;
>  	spinlock_t mm_lock;
> +	unsigned long mm_list_version;
>  	struct file *backing;
>  	struct kref refcount;
>  	struct srcu_struct srcu;
> diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
> index 39f0ddefbb79..5fb8bdfa6a1f 100644
> --- a/arch/x86/kernel/cpu/sgx/reclaim.c
> +++ b/arch/x86/kernel/cpu/sgx/reclaim.c
> @@ -184,28 +184,38 @@ static void sgx_reclaimer_block(struct sgx_epc_page *epc_page)
>  	struct sgx_encl_page *page = epc_page->owner;
>  	unsigned long addr = SGX_ENCL_PAGE_ADDR(page);
>  	struct sgx_encl *encl = page->encl;
> +	unsigned long mm_list_version;
>  	struct sgx_encl_mm *encl_mm;
>  	struct vm_area_struct *vma;
>  	int idx, ret;
>  
> -	idx = srcu_read_lock(&encl->srcu);
> +	do {
> +		mm_list_version = encl->mm_list_version;
>  
> -	list_for_each_entry_rcu(encl_mm, &encl->mm_list, list) {
> -		if (!mmget_not_zero(encl_mm->mm))
> -			continue;
> +		/* Fence reads as the CPU can reorder them. This guarantees
> +		 * that we don't access the old list with a new version.
> +		 */
> +		smp_rmb();
>  
> -		down_read(&encl_mm->mm->mmap_sem);
> +		idx = srcu_read_lock(&encl->srcu);
>  
> -		ret = sgx_encl_find(encl_mm->mm, addr, &vma);
> -		if (!ret && encl == vma->vm_private_data)
> -			zap_vma_ptes(vma, addr, PAGE_SIZE);
> +		list_for_each_entry_rcu(encl_mm, &encl->mm_list, list) {
> +			if (!mmget_not_zero(encl_mm->mm))
> +				continue;
>  
> -		up_read(&encl_mm->mm->mmap_sem);
> +			down_read(&encl_mm->mm->mmap_sem);
>  
> -		mmput_async(encl_mm->mm);
> -	}
> +			ret = sgx_encl_find(encl_mm->mm, addr, &vma);
> +			if (!ret && encl == vma->vm_private_data)
> +				zap_vma_ptes(vma, addr, PAGE_SIZE);
>  
> -	srcu_read_unlock(&encl->srcu, idx);
> +			up_read(&encl_mm->mm->mmap_sem);
> +
> +			mmput_async(encl_mm->mm);
> +		}
> +
> +		srcu_read_unlock(&encl->srcu, idx);
> +	} while (unlikely(encl->mm_list_version != mm_list_version));

This is bad, or at least it makes the code unnecessarily hard to follow
given the use of the fences: encl->mm_list_version is read twice per
iteration, once at the top of the loop and again in the while condition.
I'll change the code to use "for ( ; ; )" and do exactly one read of the
version per iteration.
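Roughly along these lines (an untested sketch just to show the shape; the
loop body is elided and the variable names are mine):

	unsigned long version = encl->mm_list_version;
	unsigned long seen;
	int idx;

	for ( ; ; ) {
		/* Order the version read before the list reads, as above. */
		smp_rmb();

		idx = srcu_read_lock(&encl->srcu);
		/* ... walk encl->mm_list and zap PTEs as in the patch ... */
		srcu_read_unlock(&encl->srcu, idx);

		/* The only read of the version in the iteration. */
		seen = encl->mm_list_version;
		if (likely(seen == version))
			break;

		version = seen;
	}

/Jarkko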