From: "Zheng, Lv" <lv.zheng@intel.com>
To: Borislav Petkov <bp@alien8.de>
Cc: linux-edac <linux-edac@vger.kernel.org>, Jiri Kosina <jkosina@suse.cz>,
	Borislav Petkov <bp@suse.de>, "Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Len Brown <lenb@kernel.org>, "Luck, Tony" <tony.luck@intel.com>,
	Tomasz Nowicki <tomasz.nowicki@linaro.org>,
	"Chen, Gong" <gong.chen@linux.intel.com>,
	Wolfram Sang <wsa@the-dreams.de>,
	Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>,
	"linux-acpi@vger.kernel.org" <linux-acpi@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: RE: [RFC PATCH 5/5] GHES: Make NMI handler have a single reader
Date: Thu, 30 Apr 2015 08:05:12 +0000	[thread overview]
Message-ID: <1AE640813FDE7649BE1B193DEA596E8802712130@SHSMSX101.ccr.corp.intel.com> (raw)
In-Reply-To: <20150429081355.GA5498@pd.tnic>

Hi,

> From: Borislav Petkov [mailto:bp@alien8.de]
> Sent: Wednesday, April 29, 2015 4:14 PM
> Subject: Re: [RFC PATCH 5/5] GHES: Make NMI handler have a single reader
>
> On Wed, Apr 29, 2015 at 12:49:59AM +0000, Zheng, Lv wrote:
> > > > We absolutely want to use atomic_add_unless() because we get to
> > > > save us the expensive
> > > >
> > > > 	LOCK; CMPXCHG
> > > >
> > > > if the value was already 1. Which is exactly what this patch is
> > > > trying to avoid - a thundering herd of cores CMPXCHGing a global
> > > > variable.
> > >
> > > IMO, on most architectures, the "cmp" part should work just like
> > > what you've done with "if". And on some architectures, if the
> > > "xchg" doesn't happen, the "cmp" part won't even cause a pipeline
> > > hazard.
>
> Even if CMPXCHG is being split into several microops, they all still
> need to flow down the pipe and require resources and tracking. And you
> only know at retire time what the CMP result is and can "discard" the
> XCHG part. Provided the uarch is smart enough to do that.
>
> This is probably why CMPXCHG needs 5,6,7,10,22,... cycles depending on
> uarch and vendor, if I can trust Agner Fog's tables. And I bet those
> numbers are best-case only and in real-life they probably tend to fall
> out even worse.
>
> CMP needs only 1. On almost every uarch and vendor. And even that cycle
> probably gets hidden with a good branch predictor.

Are there any such data around LL and SC (MIPS)?

> > If you mean the LOCK prefix, I understand now.
>
> And that makes it several times worse: 22, 40, 80, ... cycles.

I'm OK if the code still keeps the readability then.

Thanks and best regards
-Lv

> --
> Regards/Gruss,
> Boris.
>
> ECO tip #101: Trim your mails when you reply.
> --
Thread overview: 39+ messages

2015-03-27  9:22 [RFC PATCH 0/5] GHES NMI handler cleanup Borislav Petkov
2015-03-27  9:22 ` [RFC PATCH 1/5] GHES: Carve out error queueing in a separate function Borislav Petkov
2015-03-27  9:22 ` [RFC PATCH 2/5] GHES: Carve out the panic functionality Borislav Petkov
2015-03-27  9:22 ` [RFC PATCH 3/5] GHES: Panic right after detection Borislav Petkov
2015-03-27  9:22 ` [RFC PATCH 4/5] GHES: Elliminate double-loop in the NMI handler Borislav Petkov
2015-03-27  9:22 ` [RFC PATCH 5/5] GHES: Make NMI handler have a single reader Borislav Petkov
2015-04-01  7:45   ` Jiri Kosina
2015-04-01 13:49     ` Borislav Petkov
2015-04-23  8:39       ` Jiri Kosina
2015-04-23  8:59         ` Borislav Petkov
2015-04-23 18:00           ` Luck, Tony
2015-04-23 18:00             ` Luck, Tony
2015-04-27 20:23           ` Borislav Petkov
2015-04-28 14:30             ` Don Zickus
2015-04-28 14:42               ` Don Zickus
2015-04-28 14:55                 ` Borislav Petkov
2015-04-28 15:35                   ` Don Zickus
2015-04-28 16:22                     ` Borislav Petkov
2015-04-28 18:44                       ` Don Zickus
2015-05-04 15:40                         ` Borislav Petkov
2015-04-27  3:16   ` Zheng, Lv
2015-04-27  8:46     ` Borislav Petkov
2015-04-28  0:44       ` Zheng, Lv
2015-04-28  0:44         ` Zheng, Lv
2015-04-28  2:24       ` Zheng, Lv
2015-04-28  2:24         ` Zheng, Lv
2015-04-28  7:38       ` Borislav Petkov
2015-04-28 13:38         ` Zheng, Lv
2015-04-28 13:59           ` Borislav Petkov
2015-04-29  0:24             ` Zheng, Lv
2015-04-29  0:24               ` Zheng, Lv
2015-04-29  0:49             ` Zheng, Lv
2015-04-29  0:49               ` Zheng, Lv
2015-04-29  8:13               ` Borislav Petkov
2015-04-30  8:05                 ` Zheng, Lv [this message]
2015-04-30  8:05                   ` Zheng, Lv
2015-04-30  8:48                   ` Borislav Petkov
2015-05-02  0:34                     ` Zheng, Lv
2015-05-02  0:34                       ` Zheng, Lv