From: Suren Baghdasaryan
Date: Tue, 1 Mar 2022 13:12:19 -0800
Subject: Re: [RFC 1/1] mm: page_alloc: replace mm_percpu_wq with kthreads in drain_all_pages
To: Petr Mladek
Cc: Andrew Morton, Johannes Weiner, Michal Hocko, Peter Zijlstra,
	Roman Gushchin, Shakeel Butt, Minchan Kim, Tim Murray,
	linux-mm, LKML, kernel-team
In-Reply-To: <20220301122520.GB23924@pathway.suse.cz>
References: <20220225012819.1807147-1-surenb@google.com> <20220301122520.GB23924@pathway.suse.cz>

On Tue, Mar 1, 2022 at 4:25 AM Petr Mladek wrote:
>
> On Thu 2022-02-24 17:28:19, Suren Baghdasaryan wrote:
> > Sending as an RFC to confirm if this is the right direction and to
> > clarify if other tasks currently executed on mm_percpu_wq should be
> > also moved to kthreads.
> > The patch seems stable in testing but I want
> > to collect more performance data before submitting a non-RFC version.
> >
> >
> > Currently drain_all_pages uses mm_percpu_wq to drain pages from pcp
> > list during direct reclaim. The tasks on a workqueue can be delayed
> > by other tasks in the workqueues using the same per-cpu worker pool.
> > This results in sizable delays in drain_all_pages when cpus are highly
> > contended.
> > Memory management operations designed to relieve memory pressure should
> > not be allowed to be blocked by other tasks, especially if the task in direct
> > reclaim has higher priority than the blocking tasks.
> > Replace the usage of mm_percpu_wq with per-cpu low priority FIFO
> > kthreads to execute draining tasks.
> >
> > Suggested-by: Petr Mladek
> > Signed-off-by: Suren Baghdasaryan
>
> The patch looks good to me. See a few comments below about things
> where I was in doubt. But I do not see any real problem with
> this approach.

Thanks for the review, Petr. One question inline.
Other than that I would like to check if:
1. Using low priority FIFO for these kthreads is warranted. From
   https://lore.kernel.org/all/CAEe=Sxmow-jx60cDjFMY7qi7+KVc+BT++BTdwC5+G9E=1soMmQ@mail.gmail.com/#t
   my understanding was that we want this work to be done by an RT
   kthread_worker, but maybe that's not appropriate here?
2. Do we want to move any other work done on mm_percpu_wq (vmstat_work,
   lru_add_drain_all) to these kthreads?
If what I have currently is ok, I'll post the first version.

Thanks,
Suren.

> >
> > ---
> >  mm/page_alloc.c | 84 ++++++++++++++++++++++++++++++++++++++++---------
> >  1 file changed, 70 insertions(+), 14 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 3589febc6d31..c9ab2cf4b05b 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -2209,6 +2210,58 @@ _deferred_grow_zone(struct zone *zone, unsigned int order)
> >
> >  #endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */
> >
> > +static void drain_local_pages_func(struct kthread_work *work);
> > +
> > +static int alloc_drain_worker(unsigned int cpu)
> > +{
> > +	struct pcpu_drain *drain;
> > +
> > +	mutex_lock(&pcpu_drain_mutex);
> > +	drain = per_cpu_ptr(&pcpu_drain, cpu);
> > +	drain->worker = kthread_create_worker_on_cpu(cpu, 0, "pg_drain/%u", cpu);
> > +	if (IS_ERR(drain->worker)) {
> > +		drain->worker = NULL;
> > +		pr_err("Failed to create pg_drain/%u\n", cpu);
> > +		goto out;
> > +	}
> > +	/* Ensure the thread is not blocked by normal priority tasks */
> > +	sched_set_fifo_low(drain->worker->task);
> > +	kthread_init_work(&drain->work, drain_local_pages_func);
> > +out:
> > +	mutex_unlock(&pcpu_drain_mutex);
> > +
> > +	return 0;
> > +}
> > +
> > +static int free_drain_worker(unsigned int cpu)
> > +{
> > +	struct pcpu_drain *drain;
> > +
> > +	mutex_lock(&pcpu_drain_mutex);
> > +	drain = per_cpu_ptr(&pcpu_drain, cpu);
> > +	kthread_cancel_work_sync(&drain->work);
>
> I do not see how CPU down was handled in the original code.
>
> Note that workqueues call unbind_workers() when a CPU
> is going down. The pending work items might be processed
> on another CPU. From this POV, the new code looks more
> safe.
>
> > +	kthread_destroy_worker(drain->worker);
> > +	drain->worker = NULL;
> > +	mutex_unlock(&pcpu_drain_mutex);
> > +
> > +	return 0;
> > +}
> > +
> > +static void __init init_drain_workers(void)
> > +{
> > +	unsigned int cpu;
> > +
> > +	for_each_online_cpu(cpu)
> > +		alloc_drain_worker(cpu);
>
> I thought about whether this needs to be called under cpus_read_lock().
> And I think that the code should be safe as it is. There
> is this call chain:
>
>   + kernel_init_freeable()
>     + page_alloc_init_late()
>       + init_drain_workers()
>
> It is called after smp_init() but before the init process
> is executed. I guess that nobody could trigger CPU hotplug
> at this state. So there is no need to synchronize
> against it.

Should I add a comment here to describe why we don't need cpus_read_lock()
here (due to the init process not being active at this time)?

> > +
> > +	if (cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
> > +					"page_alloc/drain:online",
> > +					alloc_drain_worker,
> > +					free_drain_worker)) {
> > +		pr_err("page_alloc_drain: Failed to allocate a hotplug state\n");
>
> I am not sure if there are any special requirements about the
> ordering vs. other CPU hotplug operations.
>
> Just note that the per-CPU workqueues are started/stopped
> via CPUHP_AP_WORKQUEUE_ONLINE. They are available slightly
> earlier, before CPUHP_AP_ONLINE_DYN, when the CPU is being
> enabled.
>
> > +	}
> > +}
> > +
> >  void __init page_alloc_init_late(void)
> >  {
> >  	struct zone *zone;
>
> Best Regards,
> Petr
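
[Context for readers of the archive: the hunk quoted above only shows how the
per-CPU workers are created and torn down. A minimal sketch of how the draining
side might hand work to these workers instead of mm_percpu_wq follows. It is an
illustration, not the actual hunk from the RFC: the helper name
queue_drains_on_kthreads, the cpus_with_pcps mask, and the pcpu_drain layout
(zone pointer plus the kthread_worker/kthread_work set up in
alloc_drain_worker()) are assumptions modeled on the existing drain_all_pages()
workqueue code.]

static void queue_drains_on_kthreads(struct zone *zone, cpumask_t *cpus_with_pcps)
{
	unsigned int cpu;

	for_each_cpu(cpu, cpus_with_pcps) {
		struct pcpu_drain *drain = per_cpu_ptr(&pcpu_drain, cpu);

		drain->zone = zone;
		/* Hand the drain off to this CPU's low priority FIFO kthread. */
		if (likely(drain->worker))
			kthread_queue_work(drain->worker, &drain->work);
	}

	/* Wait for the queued drains to finish, as the workqueue version did. */
	for_each_cpu(cpu, cpus_with_pcps)
		kthread_flush_work(&per_cpu_ptr(&pcpu_drain, cpu)->work);
}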
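
[On the open question about documenting why cpus_read_lock() is not needed: one
possible shape, with the function body exactly as quoted in the RFC and only the
comment added, would capture Petr's reasoning directly above the loop. The
comment wording below is a suggestion, not text from the thread.]

static void __init init_drain_workers(void)
{
	unsigned int cpu;

	/*
	 * No cpus_read_lock() needed: this is called from
	 * page_alloc_init_late(), after smp_init() but before the init
	 * process is executed, so nothing can trigger CPU hotplug yet.
	 */
	for_each_online_cpu(cpu)
		alloc_drain_worker(cpu);

	if (cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
				      "page_alloc/drain:online",
				      alloc_drain_worker,
				      free_drain_worker)) {
		pr_err("page_alloc_drain: Failed to allocate a hotplug state\n");
	}
}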