From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from mail.linutronix.de (193.142.43.55:993)
	by crypto-ml.lab.linutronix.de with IMAP4-SSL
	for ; 12 Oct 2019 01:47:00 -0000
Received: from mga18.intel.com ([134.134.136.126])
	by Galois.linutronix.de with esmtps (TLS1.2:DHE_RSA_AES_256_CBC_SHA256:256)
	(Exim 4.80) (envelope-from )
	id 1iJ6Uk-00048e-Hf
	for speck@linutronix.de; Sat, 12 Oct 2019 03:46:59 +0200
Date: Fri, 11 Oct 2019 18:41:02 -0700
From: Pawan Gupta 
Subject: [MODERATED] Re: [PATCH v6 9/9] TAAv6 9
Message-ID: <20191012014102.GA22297@guptapadev.amr>
References: <5d9e6f13.1c69fb81.d7036.be99SMTPIN_ADDED_BROKEN@mx.google.com>
 <20191010065412.GD256999@kroah.com>
MIME-Version: 1.0
In-Reply-To: <20191010065412.GD256999@kroah.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
To: speck@linutronix.de
List-ID: 

On Thu, Oct 10, 2019 at 08:54:12AM +0200, speck for Greg KH wrote:
> > +static DEFINE_MUTEX(tsx_mutex);
>
> I still don't know what this is trying to "protect". Please at least
> document it so I have a chance to review it...

I will add these comments to the code:

	/*
	 * Protect tsx_ctrl_state and the TSX_CTRL MSR update done via
	 * on_each_cpu() from concurrent writers.
	 *
	 * - Serialize TSX_CTRL MSR writes across all CPUs when there are
	 *   concurrent sysfs requests. on_each_cpu() callback execution
	 *   order on other CPUs can differ between multiple calls to
	 *   on_each_cpu(). For conflicting concurrent sysfs requests the
	 *   lock ensures that all CPUs have updated the TSX_CTRL MSR
	 *   before the next call to on_each_cpu().
	 * - Serialize tsx_ctrl_state updates so that they don't get out
	 *   of sync with the TSX_CTRL MSR.
	 * - Serialize updates to taa_mitigation.
	 */

> > +/* Take tsx_mutex lock and update tsx_ctrl_state when calling this function */
> > +static void tsx_update_on_each_cpu(bool val)
> > +{
> > +	get_online_cpus();
> > +	on_each_cpu(tsx_update_this_cpu, (void *)val, 1);
> > +	put_online_cpus();
> > +}
>
> Why take the lock?  This is only called in one place.

So that the TSX_CTRL MSR state stays consistent across all CPUs between
multiple on_each_cpu() calls. Otherwise, overlapping conflicting
TSX_CTRL MSR writes could leave some CPUs with TSX enabled and others
with TSX disabled. For example, if an enable request and a disable
request overlap, each CPU keeps whichever write reached it last, so the
system could end up in a mixed state.

> > +ssize_t hw_tx_mem_show(struct device *dev, struct device_attribute *attr,
> > +		       char *buf)
> > +{
> > +	return sprintf(buf, "%d\n", tsx_ctrl_state == TSX_CTRL_ENABLE ? 1 : 0);
> > +}
> > +
> > +ssize_t hw_tx_mem_store(struct device *dev, struct device_attribute *attr,
> > +			const char *buf, size_t count)
> > +{
> > +	enum tsx_ctrl_states requested_state;
> > +	ssize_t ret;
> > +	bool val;
> > +
> > +	ret = kstrtobool(buf, &val);
> > +	if (ret)
> > +		return ret;
> > +
> > +	mutex_lock(&tsx_mutex);
> > +
> > +	if (val)
> > +		requested_state = TSX_CTRL_ENABLE;
> > +	else
> > +		requested_state = TSX_CTRL_DISABLE;
>
> Why is lock grabbed above and not here?

Yes, it can be moved here; an untested sketch of the reordered function
is below the sign-off.

Thanks,
Pawan
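
A minimal sketch of the reordering (untested; it assumes the rest of the
v6 function body stays as-is once the lock is taken, and elides it):

	ssize_t hw_tx_mem_store(struct device *dev, struct device_attribute *attr,
				const char *buf, size_t count)
	{
		enum tsx_ctrl_states requested_state;
		ssize_t ret;
		bool val;

		ret = kstrtobool(buf, &val);
		if (ret)
			return ret;

		/* Input parsing touches no shared state, no lock needed. */
		if (val)
			requested_state = TSX_CTRL_ENABLE;
		else
			requested_state = TSX_CTRL_DISABLE;

		/* Take the lock only around the shared-state update. */
		mutex_lock(&tsx_mutex);

		/* ... rest of the function unchanged ... */
	}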