From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Uladzislau Rezki (Sony)"
To: Andrew Morton
Cc: Daniel Wagner, Sebastian Andrzej Siewior, Thomas Gleixner,
	linux-mm@kvack.org, LKML, Peter Zijlstra, Uladzislau Rezki,
	Hillf Danton, Michal Hocko, Matthew Wilcox, Oleksiy Avramchenko,
	Steven Rostedt
Subject: [PATCH 1/1] mm/vmalloc: remove preempt_disable/enable when do preloading
Date: Wed, 9 Oct 2019 18:49:34 +0200
Message-Id: <20191009164934.10166-1-urezki@gmail.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0

Get rid of preempt_disable() and preempt_enable() when the preload
is done for splitting purposes. The reason is that calling spin_lock()
with preemption disabled is forbidden on a CONFIG_PREEMPT_RT kernel.

With this change we therefore no longer guarantee that a CPU is
preloaded; instead we minimize the cases when it is not. For example,
I ran a test case that follows the preload pattern and path: 20
unbound threads ran it, each doing 1,000,000 allocations. On average,
a CPU turned out not to be preloaded only 3.5 times per 1,000,000
allocations. So it can happen, but the number is negligible.

Fixes: 82dd23e84be3 ("mm/vmalloc.c: preload a CPU with one object for split purpose")
Signed-off-by: Uladzislau Rezki (Sony)
---
 mm/vmalloc.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index e92ff5f7dd8b..2ed6fef86950 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1078,9 +1078,12 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 
 retry:
 	/*
-	 * Preload this CPU with one extra vmap_area object to ensure
-	 * that we have it available when fit type of free area is
-	 * NE_FIT_TYPE.
+	 * Preload this CPU with one extra vmap_area object. It is used
+	 * when fit type of free area is NE_FIT_TYPE. Please note, it
+	 * does not guarantee that an allocation occurs on a CPU that
+	 * is preloaded, instead we minimize the case when it is not.
+	 * It can happen because of migration, because there is a race
+	 * until the below spinlock is taken.
 	 *
 	 * The preload is done in non-atomic context, thus it allows us
 	 * to use more permissive allocation masks to be more stable under
@@ -1089,20 +1092,16 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	 * Even if it fails we do not really care about that. Just proceed
 	 * as it is. "overflow" path will refill the cache we allocate from.
 	 */
-	preempt_disable();
-	if (!__this_cpu_read(ne_fit_preload_node)) {
-		preempt_enable();
+	if (!this_cpu_read(ne_fit_preload_node)) {
 		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
-		preempt_disable();
 
-		if (__this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva)) {
+		if (this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva)) {
 			if (pva)
 				kmem_cache_free(vmap_area_cachep, pva);
 		}
 	}
 
 	spin_lock(&vmap_area_lock);
-	preempt_enable();
 
 	/*
 	 * If an allocation fails, the "vend" address is
-- 
2.20.1
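
For reference, the preload sequence in alloc_vmap_area() ends up looking
roughly like the sketch below once the patch is applied. It is assembled
from the diff above with explanatory comments added; the comments are
illustrative only and are not part of the patch.

	/*
	 * Preemption stays enabled here. If the task migrates between
	 * the per-CPU check below and taking vmap_area_lock, the worst
	 * case is that the CPU we finally run on is not preloaded, which
	 * the "overflow" path already tolerates.
	 */
	if (!this_cpu_read(ne_fit_preload_node)) {
		/* Non-atomic context, so GFP_KERNEL is allowed. */
		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);

		/*
		 * Publish the object only if the per-CPU slot is still
		 * empty. If the cmpxchg returns a non-NULL old value
		 * (another task or a migration already filled the slot),
		 * free the extra object instead of leaking it.
		 */
		if (this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva)) {
			if (pva)
				kmem_cache_free(vmap_area_cachep, pva);
		}
	}

	spin_lock(&vmap_area_lock);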