From: Zhaoyang Huang
Date: Tue, 18 Oct 2022 10:52:19 +0800
Subject: Re: [RFC PATCH] mm: move xa forward when run across zombie page
To: Matthew Wilcox
Cc: "zhaoyang.huang", Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org, ke.wang@unisoc.com, steve.kang@unisoc.com, baocong.liu@unisoc.com, linux-fsdevel@vger.kernel.org
References: <1665725448-31439-1-git-send-email-zhaoyang.huang@unisoc.com>

On Mon, Oct 17, 2022 at 11:55 PM Matthew Wilcox wrote:
>
> On Mon, Oct 17, 2022 at 01:34:13PM +0800, Zhaoyang Huang wrote:
> > On Fri, Oct 14, 2022 at 8:12 PM Matthew Wilcox wrote:
> > >
> > > On Fri, Oct 14, 2022 at 01:30:48PM +0800, zhaoyang.huang wrote:
> > > > From: Zhaoyang Huang
> > > >
> > > > The following RCU stall is reported, where kswapd is trapped in a live lock
> > > > while shrinking a superblock's inode list. The direct cause is that a zombie
> > > > page keeps staying in the xarray's slot, making the check-and-retry loop
> > > > spin permanently. The root cause is unknown yet; it is suspected to be an
> > > > xa update without synchronize_rcu etc. I would like to suggest skipping
> > > > this page to break the live lock as a workaround.
> > >
> > > No, the underlying bug should be fixed.
> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Understood. IMHO, find_get_entry actually works as an open API dealing with the page caches of different kinds of address_spaces, which requires high robustness to handle any corner case. Take the current problem as an example: the inode holding the faulty page (refcount == 0) could remain on the sb's list without the live-lock problem.

> > OK, could I move the xas like below?
> >
> > +       if (!folio_try_get_rcu(folio)) {
> > +               xas_next_offset(xas);
> >                 goto reset;
> > +       }