Date: Mon, 7 Oct 2019 23:01:56 -0700
From: Pawan Gupta
Subject: [MODERATED] Re: [PATCH v5 09/11] TAAv5 9
Message-ID: <20191008060156.GG5154@guptapadev.amr>
References: <5d983ad2.1c69fb81.e6640.8f51SMTPIN_ADDED_BROKEN@mx.google.com> <20191006170646.GA147859@kroah.com>
In-Reply-To: <20191006170646.GA147859@kroah.com>
To: speck@linutronix.de

> > +static DEFINE_MUTEX(tsx_mutex);
>
> I think I asked this before, but in looking at the code I still can't
> figure it out.  What exactly is this protecting?
>
> It looks like you want to keep only one "writer" out of the sysfs store
> function at a time, but:
>
> > +ssize_t hw_tx_mem_store(struct device *dev, struct device_attribute *attr,
> > +                        const char *buf, size_t count)
> > +{
> > +        enum tsx_ctrl_states requested_state;
> > +        ssize_t ret;
> > +        bool val;
> > +
> > +        ret = kstrtobool(buf, &val);
> > +        if (ret)
> > +                return ret;
> > +
> > +        mutex_lock(&tsx_mutex);
> > +
> > +        if (val) {
> > +                tsx_user_cmd = TSX_USER_CMD_ON;
> > +                requested_state = TSX_CTRL_ENABLE;
> > +        } else {
> > +                tsx_user_cmd = TSX_USER_CMD_OFF;
> > +                requested_state = TSX_CTRL_DISABLE;
> > +        }
> > +
> > +        /* Current state is same as the requested state, do nothing */
> > +        if (tsx_ctrl_state == requested_state)
> > +                goto exit;
> > +
> > +        tsx_ctrl_state = requested_state;
> > +
> > +        tsx_update_on_each_cpu(val);
> > +exit:
> > +        mutex_unlock(&tsx_mutex);
>
> What I think you want to do is just protect the tsx_update_on_each_cpu()
> function, right?

Also, I believe the two operations below need to be under a lock. Without
the lock, if there are two writers and one is preempted between these
operations, tsx_ctrl_state and the TSX hardware state could go out of
sync:

        tsx_ctrl_state = requested_state;

        // 1st writer gets preempted here
        // 2nd writer flips tsx_ctrl_state and writes to the MSR.
        // 1st writer wakes up and only writes to the MSR.

        tsx_update_on_each_cpu(val);

        // tsx_ctrl_state and hardware state would be different here.

The chances of this happening are slim, but it is still a possibility.
The lock prevents such a condition.

> So, you are locking _outside_ of a function call?  That's a sure way to
> madness over time.  If this function is so special that it can not be
> called multiple times at once, then put the lock _inside_ the function,
> right?
>
> Otherwise you could have other places call that function, and this
> single lock is not going to protect anything :(

Future callers from other places should also take the lock and update
tsx_ctrl_state as well. Worth mentioning in a comment; see the rough
sketch further down.

> And why is tsx_user_cmd needed, and global?

This was added because cmdline_find_option() uses __initdata and can't be
called inside tsx_init() without tsx_init() being an __init function.
tsx_init() is in the S3/S5 and CPU hotplug paths, so it can't be __init at
its current callsite. Because tsx=auto does not translate to any of the
tsx_ctrl_state states, I added tsx_user_cmd. Anyway, I will drop this.
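Going back to the locking question, here is a rough, untested sketch of
the shape I am thinking of: the mutex moves inside a single helper that
updates both tsx_ctrl_state and the MSRs, so the sysfs store path and any
future caller get the same protection. tsx_ctrl_update() is a made-up
name for this sketch; the other identifiers are as in this series, and
tsx_user_cmd is dropped as mentioned above.

static DEFINE_MUTEX(tsx_mutex);

/*
 * Serialize updates to tsx_ctrl_state and the TSX MSRs so a second
 * writer cannot interleave between the two.
 */
static void tsx_ctrl_update(enum tsx_ctrl_states requested_state)
{
        mutex_lock(&tsx_mutex);

        if (tsx_ctrl_state != requested_state) {
                tsx_ctrl_state = requested_state;
                tsx_update_on_each_cpu(requested_state == TSX_CTRL_ENABLE);
        }

        mutex_unlock(&tsx_mutex);
}

ssize_t hw_tx_mem_store(struct device *dev, struct device_attribute *attr,
                        const char *buf, size_t count)
{
        ssize_t ret;
        bool val;

        ret = kstrtobool(buf, &val);
        if (ret)
                return ret;

        tsx_ctrl_update(val ? TSX_CTRL_ENABLE : TSX_CTRL_DISABLE);

        return count;
}

With the lock inside the helper, callers other than sysfs only need to
call tsx_ctrl_update() and get the state update and the MSR writes done
atomically with respect to each other.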
I hope it is okay to move tsx_init() to identify_boot_cpu() to avoid
adding a new enum and writing to globals from a per-CPU function. This
way tsx_init() is only called by the boot CPU, and secondary CPUs call
tsx_enable()/tsx_disable().

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 6b25039aa8ae..a4ce9e3a622c 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1576,6 +1575,8 @@ void __init identify_boot_cpu(void)
 #endif
         cpu_detect_tlb(&boot_cpu_data);
         setup_cr_pinning();
+
+        tsx_init();
 }

diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index b1d6c96f6b88..96039b5cda7c 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -762,7 +762,10 @@ static void init_intel(struct cpuinfo_x86 *c)
 
         init_intel_misc_features(c);
 
-        tsx_init(c);
+        if (tsx_ctrl_state == TSX_CTRL_ENABLE)
+                tsx_enable();
+        else if (tsx_ctrl_state == TSX_CTRL_DISABLE)
+                tsx_disable();
 }

------------------

tsx_init() can now be __init and use cmdline_find_option():

void __init tsx_init(void)
{
        char arg[20];
        int ret;

        if (!tsx_ctrl_is_supported())
                return;

        ret = cmdline_find_option(boot_command_line, "tsx", arg, sizeof(arg));
        if (ret >= 0) {
                if (!strcmp(arg, "on")) {
                        tsx_ctrl_state = TSX_CTRL_ENABLE;
                } else if (!strcmp(arg, "off")) {
                        tsx_ctrl_state = TSX_CTRL_DISABLE;
                } else if (!strcmp(arg, "auto")) {
                        if (boot_cpu_has_bug(X86_BUG_TAA))
                                tsx_ctrl_state = TSX_CTRL_DISABLE;
                        else
                                tsx_ctrl_state = TSX_CTRL_ENABLE;
                } else {
                        tsx_ctrl_state = TSX_CTRL_DISABLE;
                        pr_info("tsx: invalid option, defaulting to off\n");
                }
        } else {
                /* tsx= not provided, defaulting to off */
                tsx_ctrl_state = TSX_CTRL_DISABLE;
        }

        if (tsx_ctrl_state == TSX_CTRL_DISABLE) {
                tsx_disable();
                setup_clear_cpu_cap(X86_FEATURE_RTM);
        } else if (tsx_ctrl_state == TSX_CTRL_ENABLE) {
                tsx_enable();
                setup_force_cpu_cap(X86_FEATURE_RTM);
        }
}

I am hoping it is okay to use setup_clear_cpu_cap() in tsx_init().

Thanks,
Pawan
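P.S. tsx_enable() and tsx_disable() are not shown in this mail. For
completeness, a rough sketch of what they are expected to look like; the
MSR and bit names (MSR_IA32_TSX_CTRL, TSX_CTRL_RTM_DISABLE,
TSX_CTRL_CPUID_CLEAR) are assumed from the rest of this series and should
be treated as such here:

void tsx_disable(void)
{
        u64 tsx;

        rdmsrl(MSR_IA32_TSX_CTRL, tsx);
        /* Force all RTM transactions to abort immediately */
        tsx |= TSX_CTRL_RTM_DISABLE;
        /* Hide TSX from CPUID so applications do not try to use it */
        tsx |= TSX_CTRL_CPUID_CLEAR;
        wrmsrl(MSR_IA32_TSX_CTRL, tsx);
}

void tsx_enable(void)
{
        u64 tsx;

        rdmsrl(MSR_IA32_TSX_CTRL, tsx);
        /* Allow RTM transactions and enumerate TSX in CPUID again */
        tsx &= ~TSX_CTRL_RTM_DISABLE;
        tsx &= ~TSX_CTRL_CPUID_CLEAR;
        wrmsrl(MSR_IA32_TSX_CTRL, tsx);
}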