From: Gavin Hu
Subject: Re: [PATCH v2] ring: fix c11 memory ordering issue
Date: Tue, 7 Aug 2018 07:56:08 +0000
To: "He, Jia"; dev@dpdk.org
Cc: Honnappa Nagarahalli; Steve Capper; Ola Liljedahl; jerin.jacob@caviumnetworks.com; hemant.agrawal@nxp.com; stable@dpdk.org

Hi Jia,

Thanks for your feedback, let's see if there are requests from others to split the fix.

There is a race condition between updating the tail and getting free/avail_entries, which depends on the tails; the two have to be synchronized by load-acquire and store-release. In the simple steps below, step #1 and step #5 must synchronize with each other, in both directions, otherwise the free_/avail_entries calculation can come out wrong.

On each lcore, whether enqueuing or dequeuing, the order of step #1 and step #2 must also be maintained, as #2 depends on #1; that is why acquire ordering is necessary.

Please raise new questions if I haven't got this across clearly.

Ring enqueue / lcore #0              Ring dequeue / lcore #1
1. Load-acquire cons_tail            1. Load-acquire prod_tail
2. Get free_entries                  2. Get avail_entries
3. Move prod_head accordingly        3. Move cons_head accordingly
4. Do enqueue operations             4. Do dequeue operations
5. Store-release prod_tail           5. Store-release cons_tail
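To make the pairing concrete, the sketch below shows the same five steps on a stripped-down single-producer/single-consumer ring, using the same GCC __atomic builtins as rte_ring_c11_mem.h. It is only an illustration under simplifying assumptions (one producer, one consumer, power-of-two size), not the actual rte_ring code; the toy_ring, toy_enqueue and toy_dequeue names are made up for this example.

#include <stdbool.h>
#include <stdint.h>

#define RING_SIZE 32u                  /* power of two so masking works */
#define RING_MASK (RING_SIZE - 1u)

struct toy_ring {
	uint32_t prod_head, prod_tail;     /* written by the producer */
	uint32_t cons_head, cons_tail;     /* written by the consumer */
	void *objs[RING_SIZE];
};

static bool
toy_enqueue(struct toy_ring *r, void *obj)
{
	/* #1: load-acquire cons_tail; pairs with the store-release in
	 * toy_dequeue(). Without acquire, #2 could be computed from a
	 * stale or reordered value of cons_tail.
	 */
	uint32_t cons_tail = __atomic_load_n(&r->cons_tail, __ATOMIC_ACQUIRE);
	uint32_t old_head = r->prod_head;  /* single producer: plain load */

	/* #2: free_entries; unsigned arithmetic is modulo 2^32, so this
	 * stays correct when the indices wrap around.
	 */
	uint32_t free_entries = RING_SIZE + cons_tail - old_head;
	if (free_entries == 0)
		return false;

	/* #3: move prod_head (single producer, so no CAS needed here) */
	r->prod_head = old_head + 1;

	/* #4: do the enqueue operation */
	r->objs[old_head & RING_MASK] = obj;

	/* #5: store-release prod_tail; a consumer that load-acquires it
	 * is then guaranteed to also observe the write done in #4.
	 */
	__atomic_store_n(&r->prod_tail, old_head + 1, __ATOMIC_RELEASE);
	return true;
}

static bool
toy_dequeue(struct toy_ring *r, void **obj)
{
	/* #1: load-acquire prod_tail; pairs with the store-release above */
	uint32_t prod_tail = __atomic_load_n(&r->prod_tail, __ATOMIC_ACQUIRE);
	uint32_t old_head = r->cons_head;  /* single consumer: plain load */

	/* #2: avail_entries */
	uint32_t avail_entries = prod_tail - old_head;
	if (avail_entries == 0)
		return false;

	/* #3: move cons_head */
	r->cons_head = old_head + 1;

	/* #4: do the dequeue operation */
	*obj = r->objs[old_head & RING_MASK];

	/* #5: store-release cons_tail; the producer only sees the slot as
	 * free after the read in #4 has completed.
	 */
	__atomic_store_n(&r->cons_tail, old_head + 1, __ATOMIC_RELEASE);
	return true;
}

In the real MPMC code the head is moved with __atomic_compare_exchange_n and update_tail() additionally waits for preceding enqueues/dequeues to finish, but the acquire/release pairing between step #1 on one lcore and step #5 on the other is the same pairing the patch establishes.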
Best Regards,
Gavin

-----Original Message-----
From: He, Jia
Sent: Tuesday, August 7, 2018 1:57 PM
To: Gavin Hu; dev@dpdk.org
Cc: Honnappa Nagarahalli; Steve Capper; Ola Liljedahl; jerin.jacob@caviumnetworks.com; hemant.agrawal@nxp.com; stable@dpdk.org
Subject: RE: [PATCH v2] ring: fix c11 memory ordering issue

Hi Gavin

> -----Original Message-----
> From: Gavin Hu [mailto:gavin.hu@arm.com]
> Sent: August 7, 2018 11:20
> To: dev@dpdk.org
> Cc: gavin.hu@arm.com; Honnappa.Nagarahalli@arm.com;
> steve.capper@arm.com; Ola.Liljedahl@arm.com;
> jerin.jacob@caviumnetworks.com; hemant.agrawal@nxp.com; He, Jia;
> stable@dpdk.org
> Subject: [PATCH v2] ring: fix c11 memory ordering issue
>
> This patch includes two bug fixes (#1 and #2) and two optimisations (#3 and #4).

Maybe you need to split this into small parts.

> 1) In update_tail, read ht->tail using __atomic_load. Although the
> compiler currently seems to be doing the right thing even without
> __atomic_load, we don't want to give the compiler freedom to optimise
> what should be an atomic load; it should not be arbitrarily moved
> around.
> 2) Synchronize the load-acquire of the tail and the store-release
> within update_tail: the store-release ensures all the ring operations,
> enqueue or dequeue, are seen by the observers as soon as they see
> the updated tail. The load-acquire is required to correctly compute
> the free_entries or avail_entries, respectively for enqueue and
> dequeue operations; the data dependency is not reliable for ordering,
> as the compiler might break it by saving to temporary values to boost
> performance.

Could you describe the race condition in detail? e.g.

cpu1                     cpu2
code1                    code2

Cheers,
Jia

> 3) In __rte_ring_move_prod_head, move the __atomic_load_n up and out of
> the do {} while loop: upon failure the old_head will be updated, so
> another load is costly and not necessary.
> 4) When calling __atomic_compare_exchange_n, use relaxed ordering for
> both success and failure, as multiple threads can work independently
> on the same end of the ring (either enqueue or dequeue) without
> synchronization, unlike operating on the tail, which has to be
> finished in sequence.
>
> The patch was benchmarked with test/ring_perf_autotest and it
> decreases the enqueue/dequeue latency by 5% ~ 24.6% with two lcores;
> the real gains depend on the number of lcores, the depth of the ring,
> and SPSC or MPMC.
> For 1 lcore, it also improves a little, about 3 ~ 4%.
> It is a big improvement; in the case of MPMC with a ring size of 32,
> it saves latency up to (6.90-5.20)/6.90 = 24.6%.
>
> Test result data:
>
> SP/SC bulk enq/dequeue (size: 8): 13.19
> MP/MC bulk enq/dequeue (size: 8): 25.79
> SP/SC bulk enq/dequeue (size: 32): 3.85
> MP/MC bulk enq/dequeue (size: 32): 6.90
>
> SP/SC bulk enq/dequeue (size: 8): 12.05
> MP/MC bulk enq/dequeue (size: 8): 23.06
> SP/SC bulk enq/dequeue (size: 32): 3.62
> MP/MC bulk enq/dequeue (size: 32): 5.20
>
> Fixes: 39368ebfc6 ("ring: introduce C11 memory model barrier option")
> Cc: stable@dpdk.org
>
> Signed-off-by: Gavin Hu
> Reviewed-by: Honnappa Nagarahalli
> Reviewed-by: Steve Capper
> Reviewed-by: Ola Liljedahl
> ---
>  lib/librte_ring/rte_ring_c11_mem.h | 38 +++++++++++++++++++++++++-------------
>  1 file changed, 25 insertions(+), 13 deletions(-)
>
> diff --git a/lib/librte_ring/rte_ring_c11_mem.h b/lib/librte_ring/rte_ring_c11_mem.h
> index 94df3c4a6..cfa3be4a7 100644
> --- a/lib/librte_ring/rte_ring_c11_mem.h
> +++ b/lib/librte_ring/rte_ring_c11_mem.h
> @@ -21,7 +21,8 @@ update_tail(struct rte_ring_headtail *ht, uint32_t old_val, uint32_t new_val,
>  	 * we need to wait for them to complete
>  	 */
>  	if (!single)
> -		while (unlikely(ht->tail != old_val))
> +		while (unlikely(old_val != __atomic_load_n(&ht->tail,
> +					__ATOMIC_RELAXED)))
>  			rte_pause();
>
>  	__atomic_store_n(&ht->tail, new_val, __ATOMIC_RELEASE);
> @@ -60,20 +61,24 @@ __rte_ring_move_prod_head(struct rte_ring *r, unsigned int is_sp,
>  	unsigned int max = n;
>  	int success;
>
> +	*old_head = __atomic_load_n(&r->prod.head, __ATOMIC_RELAXED);
>  	do {
>  		/* Reset n to the initial burst count */
>  		n = max;
>
> -		*old_head = __atomic_load_n(&r->prod.head,
> -					__ATOMIC_ACQUIRE);
>
> -		/*
> -		 * The subtraction is done between two unsigned 32bits value
> +		/* load-acquire synchronize with store-release of ht->tail
> +		 * in update_tail.
> +		 */
> +		const uint32_t cons_tail = __atomic_load_n(&r->cons.tail,
> +					__ATOMIC_ACQUIRE);
> +
> +		/* The subtraction is done between two unsigned 32bits value
>  		 * (the result is always modulo 32 bits even if we have
>  		 * *old_head > cons_tail). So 'free_entries' is always between 0
>  		 * and capacity (which is < size).
>  		 */
> -		*free_entries = (capacity + r->cons.tail - *old_head);
> +		*free_entries = (capacity + cons_tail - *old_head);
>
>  		/* check that we have enough room in ring */
>  		if (unlikely(n > *free_entries))
> @@ -87,9 +92,10 @@ __rte_ring_move_prod_head(struct rte_ring *r, unsigned int is_sp,
>  		if (is_sp)
>  			r->prod.head = *new_head, success = 1;
>  		else
> +			/* on failure, *old_head is updated */
>  			success = __atomic_compare_exchange_n(&r->prod.head,
>  					old_head, *new_head,
> -					0, __ATOMIC_ACQUIRE,
> +					/*weak=*/0, __ATOMIC_RELAXED,
>  					__ATOMIC_RELAXED);
>  	} while (unlikely(success == 0));
>  	return n;
> @@ -128,18 +134,23 @@ __rte_ring_move_cons_head(struct rte_ring *r, int is_sc,
>  	int success;
>
>  	/* move cons.head atomically */
> +	*old_head = __atomic_load_n(&r->cons.head, __ATOMIC_RELAXED);
>  	do {
>  		/* Restore n as it may change every loop */
>  		n = max;
> -		*old_head = __atomic_load_n(&r->cons.head,
> -					__ATOMIC_ACQUIRE);
> +
> +		/* this load-acquire synchronize with store-release of ht->tail
> +		 * in update_tail.
> +		 */
> +		const uint32_t prod_tail = __atomic_load_n(&r->prod.tail,
> +					__ATOMIC_ACQUIRE);
>
>  		/* The subtraction is done between two unsigned 32bits value
>  		 * (the result is always modulo 32 bits even if we have
>  		 * cons_head > prod_tail). So 'entries' is always between 0
>  		 * and size(ring)-1.
>  		 */
> -		*entries = (r->prod.tail - *old_head);
> +		*entries = (prod_tail - *old_head);
>
>  		/* Set the actual entries for dequeue */
>  		if (n > *entries)
> @@ -152,10 +163,11 @@ __rte_ring_move_cons_head(struct rte_ring *r, int is_sc,
>  		if (is_sc)
>  			r->cons.head = *new_head, success = 1;
>  		else
> +			/* on failure, *old_head will be updated */
>  			success = __atomic_compare_exchange_n(&r->cons.head,
> -					old_head, *new_head,
> -					0, __ATOMIC_ACQUIRE,
> -					__ATOMIC_RELAXED);
> +					old_head, *new_head,
> +					/*weak=*/0, __ATOMIC_RELAXED,
> +					__ATOMIC_RELAXED);
>  	} while (unlikely(success == 0));
>  	return n;
>  }
> --
> 2.11.0