From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 29 Mar 2023 08:53:02 -1000
From: Tejun Heo
To: Yosry Ahmed
Cc: Shakeel Butt, Josef Bacik, Jens Axboe, Zefan Li, Johannes Weiner,
	Michal Hocko, Roman Gushchin, Muchun Song, Andrew Morton,
	Vasily Averin, cgroups@vger.kernel.org, linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org, bpf@vger.kernel.org
Subject: Re: [RFC PATCH 1/7] cgroup: rstat: only disable interrupts for the
	percpu lock
References: <20230323040037.2389095-1-yosryahmed@google.com>
	<20230323040037.2389095-2-yosryahmed@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hello, Yosry.

On Mon, Mar 27, 2023 at 04:23:13PM -0700, Yosry Ahmed wrote:
> Tejun, if having the lock be non-irq is a non-starter for you, I can

This is an actual hazard. In prod, we see these unprotected locks causing
very big spikes in tail latencies, and they can be tricky to root-cause
too. Given the way the rstat lock is used, it's highly likely to be
involved in those scenarios with the proposed change, so it's gonna be a
nack from my end.

> send a patch that instead gives up the lock and reacquires it at every
> CPU boundary unconditionally -- or perhaps every N CPU boundaries to
> avoid excessively releasing and reacquiring the lock.

I'd just do the simple thing and see whether there's any perf penalty
before making it complicated. I'd be pretty surprised if unlocking and
relocking the same spinlock adds any noticeable overhead here.

Thanks.

-- 
tejun
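
A minimal sketch of the "simple thing" discussed above -- keep the lock
irq-disabling, but unconditionally release and reacquire it at every CPU
boundary of the flush. The lock definition and flush_one_cpu() below are
placeholders rather than the kernel's actual rstat code; only the locking
primitives and the CPU iterator are standard kernel APIs.

#include <linux/cgroup.h>
#include <linux/cpumask.h>
#include <linux/spinlock.h>

/* Placeholders -- the real lock and flush logic live in kernel/cgroup/rstat.c. */
static DEFINE_SPINLOCK(cgroup_rstat_lock);

static void flush_one_cpu(struct cgroup *cgrp, int cpu)
{
	/* per-CPU flush work for @cgrp would go here */
}

static void rstat_flush_sketch(struct cgroup *cgrp)
{
	int cpu;

	spin_lock_irq(&cgroup_rstat_lock);
	for_each_possible_cpu(cpu) {
		flush_one_cpu(cgrp, cpu);

		/*
		 * Drop and retake the lock at each CPU boundary so
		 * pending irqs and other waiters can run.  The open
		 * question in the thread is whether this unlock/lock
		 * pair costs anything measurable.
		 */
		spin_unlock_irq(&cgroup_rstat_lock);
		spin_lock_irq(&cgroup_rstat_lock);
	}
	spin_unlock_irq(&cgroup_rstat_lock);
}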