From: "Gavin Hu (Arm Technology China)"
Subject: Re: [PATCH v2] ring: enforce reading the tails before ring operations
Date: Sat, 9 Mar 2019 10:28:57 +0000
Message-ID: 
References: <1551841661-42892-1-git-send-email-gavin.hu@arm.com> <2601191342CEEE43887BDE71AB9772580136556F40@irsmsx105.ger.corp.intel.com> <2456717.RLOWIjrx09@xps>
To: Honnappa Nagarahalli, "thomas@monjalon.net", "Ananyev, Konstantin"
Cc: Ilya Maximets, "dev@dpdk.org", nd, "jerinj@marvell.com", "hemant.agrawal@nxp.com", "Nipun.gupta@nxp.com", "olivier.matz@6wind.com", "Richardson, Bruce", "chaozhu@linux.vnet.ibm.com", nd
List-Id: DPDK patches and discussions

> -----Original Message-----
> From: Honnappa Nagarahalli
> Sent: Saturday, March 9, 2019 7:48 AM
> To: thomas@monjalon.net; Ananyev, Konstantin; Gavin Hu (Arm Technology China)
> Cc: Ilya Maximets; dev@dpdk.org; nd; jerinj@marvell.com;
> hemant.agrawal@nxp.com; Nipun.gupta@nxp.com; olivier.matz@6wind.com;
> Richardson, Bruce; chaozhu@linux.vnet.ibm.com; nd
> Subject: RE: [PATCH v2] ring: enforce reading the tails before ring
> operations
>
> > 08/03/2019 16:50, Ananyev, Konstantin:
> > > 08/03/2019 16:05, Gavin Hu (Arm Technology China):
> > > > Anyway, on x86, smp_rmb, as a compiler barrier, applies to
> > > > load/store, not only load/load.
> > >
> > > Yes, that's true, but I think that happened by coincidence, not
> > > intentionally.
> > >
> > > > This is the case also for arm, arm64, ppc32, ppc64.
> > > > I will submit a patch to expand the definition of this API.
> > >
> > > I understand your intention, but does that mean we would also need
> > > to change not only rte_smp_rmb() but rte_rmb() too (to keep things
> > > consistent)? That sounds worrying.
> > > Might be better to keep the smp_rmb() definition as it is, and
> > > introduce a new function that fits your purposes (smp_rwmb or
> > > smp_load_store_barrier)?
> Looking at the rte_rmb, rte_io_rmb, and rte_cio_rmb implementations for Arm,
> they all provide a load/store barrier as well. If other architectures also
> provide a load/store barrier with rte_xxx_rmb, then we could extend the
> meaning of the existing APIs.
Further looking at the rte_rmb, rte_io_rmb, and rte_cio_rmb implementations
for PPC64 and x86, they also provide a load/store barrier. It is safe to
extend the meaning of the existing rte_XXX_rmb APIs.
>
> Even if a new API is provided, we would need to provide the same APIs for
> the IO and CIO variants.
Since the rte_XXX_rmb APIs for all architectures already provide the desired
load/store ordering, a new API is redundant and not needed.
> > > > How is it managed in other projects?
> In my experience, I have usually been changing the algorithms to use the
> C11 memory model, so I have not come across this issue yet. Others can
> comment.