Subject: Re: [RFD] resctrl: reassigning a running container's CTRL_MON group
From: Peter Newman
Date: Tue, 1 Nov 2022 16:23:13 +0100
To: Reinette Chatre
Cc: James Morse, Tony Luck, "Yu, Fenghua", "Eranian, Stephane", linux-kernel@vger.kernel.org, Thomas Gleixner, Babu Moger, Gaurang Upasani
In-Reply-To: <08c0e91a-a17a-5dad-0638-800a4db5034f@intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Reinette,

On Thu, Oct 27, 2022 at 7:36 PM Reinette Chatre wrote:
> On 10/27/2022 12:56 AM, Peter Newman wrote:
> > On Wed, Oct 26, 2022 at 11:12 PM Reinette Chatre wrote:
> >> The original concern is "the stores to t->closid and t->rmid could be
> >> reordered with the task_curr(t) and task_cpu(t) reads which follow". I
> >> can see that issue. Have you considered using the compiler barrier,
> >> barrier(), instead? From what I understand it will prevent the
> >> compiler from moving the memory accesses.
> >> This is what is currently done in __rdtgroup_move_task() and could be
> >> done here also?
> >
> > A memory system (including those on x86) is allowed to reorder a store
> > with a later load, in addition to the compiler doing so.
> >
> > Also, because the locations in question can be concurrently accessed by
> > another CPU, a compiler barrier would not be sufficient.
>
> This is hard. Regarding the concurrent access from another CPU, it seems
> that task_rq_lock() is available to prevent races with schedule(). Using
> this may be able to prevent task_curr(t) changing during this time, and
> thus the local reordering may not be a problem. I am not familiar with
> task_rq_lock() though; surely there are many details to consider in this
> area.

Yes, it looks like the task's rq_lock would provide the necessary
ordering. It's not feasible to ensure the IPI arrives before the target
task migrates away, but the task would need to obtain the same lock in
order to migrate off of its current CPU, so that alone would ensure that
the next migration observes the updates.

The difficulty is that this lock is private to sched/, so I'd have to
propose some API. It would make sense for the API to return the results
of task_curr(t) and task_cpu(t) to the caller, to avoid giving the
impression that this function would be useful for anything other than
helping someone do an smp_call_function targeting a task's CPU.

I'll just have to push a patch and see what people say.

-Peter
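
[Editor's note: the API discussed above could be sketched roughly as
below. This is an illustration, not code from the thread: the helper
name resctrl_sync_task() and its exact signature are hypothetical, while
task_rq_lock()/task_rq_unlock(), task_curr(), and task_cpu() are the
existing scheduler-internal primitives being discussed. It will not
build outside a kernel tree.]

```c
/*
 * Hypothetical sched/ helper: publish a task's new closid/rmid under the
 * task's runqueue lock, and report which CPU (if any) the caller should
 * IPI because the task is currently running there.
 *
 * Because a task must take its rq lock to migrate off its current CPU,
 * any migration that happens after this returns is guaranteed to observe
 * the updated closid/rmid.
 */
static int resctrl_sync_task(struct task_struct *p, u32 closid, u32 rmid)
{
	struct rq_flags rf;
	struct rq *rq;
	int cpu = -1;

	rq = task_rq_lock(p, &rf);

	WRITE_ONCE(p->closid, closid);
	WRITE_ONCE(p->rmid, rmid);

	/* Both reads are stable while the rq lock is held. */
	if (task_curr(p))
		cpu = task_cpu(p);

	task_rq_unlock(rq, p, &rf);

	return cpu;	/* caller sends the IPI if cpu >= 0 */
}
```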