From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 5 Jan 2022 15:16:53 +0100
From: Michal Koutný
To: Sebastian Andrzej Siewior
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, Johannes Weiner, Michal Hocko, Vladimir Davydov, Andrew Morton, Thomas Gleixner, Waiman Long, Peter Zijlstra
Subject: Re: [RFC PATCH 1/3] mm/memcg: Protect per-CPU counter by disabling preemption on PREEMPT_RT
Message-ID: <20220105141653.GA6464@blackbody.suse.cz>
References: <20211222114111.2206248-1-bigeasy@linutronix.de> <20211222114111.2206248-2-bigeasy@linutronix.de>
In-Reply-To: <20211222114111.2206248-2-bigeasy@linutronix.de>
On Wed, Dec 22, 2021 at 12:41:09PM +0100, Sebastian Andrzej Siewior wrote:
> The sections with disabled preemption must exclude
> memcg_check_events() so that spinlock_t locks can still be acquired
> (for instance in eventfd_signal()).

The resulting construct in uncharge_batch() raises eyebrows. If you can
decouple the per-CPU updates from memcg_check_events() on PREEMPT_RT, why
not tackle it the same way on !PREEMPT_RT too (and have just one variant
of the block)?

(Actually, it doesn't seem to me that memcg_check_events() can be
extracted like this from the preempt-disabled block, since
mem_cgroup_event_ratelimit() relies on a similar RMW pattern. Things
would be simpler if PREEMPT_RT didn't allow the threshold event handlers
(akin to Michal Hocko's suggestion of rejecting the soft limit).)

Thanks,
Michal
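
P.S. For illustration only, a simplified sketch of the kind of split being
discussed in uncharge_batch() -- this is just the shape of the construct,
not the patch verbatim; the local names (memcg, page, nr_pages, nr_pgpgout,
flags) stand in for the real arguments:

	if (!IS_ENABLED(CONFIG_PREEMPT_RT)) {
		/* !PREEMPT_RT: per-CPU update and event check stay together. */
		local_irq_save(flags);
		__count_memcg_events(memcg, PGPGOUT, nr_pgpgout);
		__this_cpu_add(memcg->vmstats_percpu->nr_page_events, nr_pages);
		memcg_check_events(memcg, page);
		local_irq_restore(flags);
	} else {
		/*
		 * PREEMPT_RT: only the per-CPU RMW is protected; the event
		 * check runs preemptible so it may still take spinlock_t
		 * locks (e.g. via eventfd_signal()).
		 */
		preempt_disable();
		__count_memcg_events(memcg, PGPGOUT, nr_pgpgout);
		__this_cpu_add(memcg->vmstats_percpu->nr_page_events, nr_pages);
		preempt_enable();
		memcg_check_events(memcg, page);
	}

The question above is whether the second variant could simply replace the
first, or whether memcg_check_events() must stay inside the protected
section because of the RMW in mem_cgroup_event_ratelimit().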