From: Uladzislau Rezki
Date: Thu, 3 Oct 2019 13:55:36 +0200
To: Daniel Wagner
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-rt-users@vger.kernel.org, Andrew Morton, Uladzislau Rezki
Subject: Re: [PATCH] mm: vmalloc: Use the vmap_area_lock to protect ne_fit_preload_node
Message-ID: <20191003115536.GA15745@pc636>
References: <20191003090906.1261-1-dwagner@suse.de>
In-Reply-To: <20191003090906.1261-1-dwagner@suse.de>

On Thu, Oct 03, 2019 at 11:09:06AM +0200, Daniel Wagner wrote:
> Replace preempt_enable() and preempt_disable() with the vmap_area_lock
> spin_lock instead. Calling spin_lock() with preempt disabled is
> illegal for -rt. Furthermore, enabling preemption inside the
> spin_lock() doesn't really make sense.
>
> Fixes: 82dd23e84be3 ("mm/vmalloc.c: preload a CPU with one object for
> split purpose")
> Cc: Uladzislau Rezki (Sony)
> Signed-off-by: Daniel Wagner
> ---
>  mm/vmalloc.c | 9 +++------
>  1 file changed, 3 insertions(+), 6 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 08c134aa7ff3..0d1175673583 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1091,11 +1091,11 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
>  	 * Even if it fails we do not really care about that. Just proceed
>  	 * as it is. "overflow" path will refill the cache we allocate from.
>  	 */
> -	preempt_disable();
> +	spin_lock(&vmap_area_lock);
>  	if (!__this_cpu_read(ne_fit_preload_node)) {
> -		preempt_enable();
> +		spin_unlock(&vmap_area_lock);
>  		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
> -		preempt_disable();
> +		spin_lock(&vmap_area_lock);
>
>  		if (__this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva)) {
>  			if (pva)
> @@ -1103,9 +1103,6 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
>  		}
>  	}
>
> -	spin_lock(&vmap_area_lock);
> -	preempt_enable();
> -
>  	/*
>  	 * If an allocation fails, the "vend" address is
>  	 * returned. Therefore trigger the overflow path.
> --
> 2.16.4
>

Some background. The idea was to avoid taking vmap_area_lock several
times, so preempt_disable()/preempt_enable() were used instead, in order
to stay on the same CPU. When it comes to PREEMPT_RT that is a problem, so

Reviewed-by: Uladzislau Rezki (Sony)

--
Vlad Rezki
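
For reference, below is a minimal sketch of how the preload section of
alloc_vmap_area() reads once the two hunks above are applied. It is
reconstructed only from the diff context quoted in this mail; the body of
the __this_cpu_cmpxchg() branch, which frees the unused preallocation,
lies outside the quoted hunks and is therefore only indicated by a
comment rather than shown verbatim.

	/*
	 * Even if it fails we do not really care about that. Just proceed
	 * as it is. "overflow" path will refill the cache we allocate from.
	 */
	spin_lock(&vmap_area_lock);
	if (!__this_cpu_read(ne_fit_preload_node)) {
		/* Drop the lock around the allocation, which may sleep. */
		spin_unlock(&vmap_area_lock);
		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
		spin_lock(&vmap_area_lock);

		if (__this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva)) {
			/*
			 * Another context already preloaded this CPU; the
			 * spare object is released here (those lines are
			 * context elided from the quoted hunks).
			 */
		}
	}
	/*
	 * vmap_area_lock is held from this point on, which replaces the
	 * spin_lock() + preempt_enable() pair removed by the second hunk.
	 */

The net effect is that the preload check now runs under vmap_area_lock
itself instead of under disabled preemption, avoiding the spin_lock()
call with preemption disabled that the commit message flags as illegal
on -rt.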