Date: Tue, 26 May 2020 11:45:28 -0400
From: Johannes Weiner
To: Hugh Dickins
Cc: Andrew Morton, Alex Shi, Joonsoo Kim, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH mmotm] mm/swap: fix livelock in __read_swap_cache_async()
Message-ID: <20200526154528.GA850116@cmpxchg.org>

On Thu, May 21, 2020 at 10:56:20PM -0700, Hugh Dickins wrote:
> I've only seen this livelock on one machine (repeatably, but not to
> order), and not fully analyzed it - two processes seen looping around
> getting -EEXIST from swapcache_prepare(), I guess a third (at lower
> priority? but wanting the same cpu as one of the loopers? preemption
> or cond_resched() not enough to let it back in?) set SWAP_HAS_CACHE,
> then went off into direct reclaim, scheduled away, and somehow could
> not get back to add the page to swap cache and let them all complete.
>
> Restore the page allocation in __read_swap_cache_async() to before
> the swapcache_prepare() call: "mm: memcontrol: charge swapin pages
> on instantiation" moved it outside the loop, which indeed looks much
> nicer, but exposed this weakness. We used to allocate new_page once
> and then keep it across all iterations of the loop: but I think that
> just optimizes for a rare case, and complicates the flow, so go with
> the new simpler structure, with allocate+free each time around (which
> is more considerate use of the memory too).
>
> Fix the comment on the looping case, which has long been inaccurate:
> it's not a racing get_swap_page() that's the problem here.
>
> Fix the add_to_swap_cache() and mem_cgroup_charge() error recovery:
> not swap_free(), but put_swap_page() to undo SWAP_HAS_CACHE, as was
> done before; but delete_from_swap_cache() already includes it.
>
> And one more nit: I don't think it makes any difference in practice,
> but remove the "& GFP_KERNEL" mask from the mem_cgroup_charge() call:
> add_to_swap_cache() needs that, to convert gfp_mask from user and page
> cache allocation (e.g. highmem) to radix node allocation (lowmem), but
> we don't need or usually apply that mask when charging mem_cgroup.
>
> Signed-off-by: Hugh Dickins
> ---

Acked-by: Johannes Weiner

> Mostly fixing mm-memcontrol-charge-swapin-pages-on-instantiation.patch
> but now I see that mm-memcontrol-delete-unused-lrucare-handling.patch
> made a further change here (took an arg off the mem_cgroup_charge call):
> as is, this patch is diffed to go on top of both of them, and better
> that I get it out now for Johannes to look at; but could be rediffed for
> folding into blah-instantiation.patch later.

IMO it's worth having as a separate change. Joonsoo was concerned about
the ordering, but I didn't see a problem with it.
Having this sequence of changes on record would be good for later reference.
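
For anyone following along in the archive, here is a rough sketch of the
flow that the patch description above implies. It is written from the
description, not copied from the actual mm/swap_state.c diff: the function
name is made up, the readahead and swap-count checks and the
*new_page_allocated bookkeeping are left out, and mem_cgroup_charge() is
shown in the three-argument form used after the lrucare-removal patch.
Treat it as illustration only.

#include <linux/gfp.h>
#include <linux/memcontrol.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/swap.h>
#include <linux/swapops.h>

/* Illustrative sketch only - not the real __read_swap_cache_async(). */
static struct page *swapin_alloc_sketch(swp_entry_t entry, gfp_t gfp_mask,
					struct vm_area_struct *vma,
					unsigned long addr)
{
	struct page *page;
	int err;

	for (;;) {
		/* Already in swap cache?  Then we are done. */
		page = find_get_page(swap_address_space(entry),
				     swp_offset(entry));
		if (page)
			return page;

		/*
		 * Allocate before swapcache_prepare(), i.e. before setting
		 * SWAP_HAS_CACHE, so the winner of the race is not stuck in
		 * reclaim while everyone else loops on -EEXIST.
		 */
		page = alloc_page_vma(gfp_mask, vma, addr);
		if (!page)
			return NULL;

		err = swapcache_prepare(entry);
		if (!err)
			break;			/* we own SWAP_HAS_CACHE */

		put_page(page);			/* drop this round's allocation */
		if (err != -EEXIST)
			return NULL;

		/* Another task set SWAP_HAS_CACHE; let it finish adding its page. */
		cond_resched();
	}

	__SetPageLocked(page);
	__SetPageSwapBacked(page);

	/* add_to_swap_cache() needs a lowmem-safe mask for its XArray node. */
	if (add_to_swap_cache(page, entry, gfp_mask & GFP_KERNEL)) {
		put_swap_page(page, entry);	/* undo SWAP_HAS_CACHE, not swap_free() */
		goto fail_unlock;
	}

	/* No "& GFP_KERNEL" here: charging needs no lowmem conversion. */
	if (mem_cgroup_charge(page, NULL, gfp_mask)) {
		delete_from_swap_cache(page);	/* already drops SWAP_HAS_CACHE */
		goto fail_unlock;
	}

	/* Caller would now kick off swap_readpage() and add the page to the LRU. */
	return page;

fail_unlock:
	unlock_page(page);
	put_page(page);
	return NULL;
}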