From: "Wang, Yipeng1"
Subject: Re: [PATCH 2/4] hash: add memory ordering to avoid race conditions
Date: Fri, 28 Sep 2018 00:43:15 +0000
To: Honnappa Nagarahalli, "Richardson, Bruce", "De Lara Guarch, Pablo"
Cc: dev@dpdk.org, gavin.hu@arm.com, steve.capper@arm.com, ola.liljedahl@arm.com, nd@arm.com
In-Reply-To: <1536253938-192391-3-git-send-email-honnappa.nagarahalli@arm.com>
References: <1536253938-192391-1-git-send-email-honnappa.nagarahalli@arm.com> <1536253938-192391-3-git-send-email-honnappa.nagarahalli@arm.com>

Some general comments on the various __atomic_store/load calls added:

1. Although it passes the compiler check, I just want to confirm whether we
should use the GCC/clang builtins directly, or whether there are higher-level
APIs in DPDK for atomic operations.

2. We believe the compiler will translate the atomic store/load to regular MOV
instructions on a Total Store Order architecture (e.g. x86_64). However, we ran
the perf test on x86, and below is the relative slowdown on lookup compared to
master head. I am not sure whether the performance drop comes from the atomic
builtins.

Keysize | single lookup | bulk lookup
4       | 0.93          | 0.95
8       | 0.95          | 0.96
16      | 0.97          | 0.96
32      | 0.97          | 1.00
48      | 1.03          | 0.99
64      | 1.04          | 0.98
9       | 0.91          | 0.96
13      | 0.97          | 0.98
37      | 1.04          | 1.03
40      | 1.02          | 0.98

Other comments inlined.

>-----Original Message-----
>Only race condition that can occur is - using the key store element
>before the key write is completed. Hence, while inserting the element
>the release memory order is used. Any other race condition is caught
>by the key comparison. Memory orderings are added only where needed.
>For ex: reads in the writer's context do not need memory ordering
>as there is a single writer.
>
[Wang, Yipeng] I assume this commit works together with the next one to enable
read-write concurrency on relaxed memory ordering machines. This commit by
itself does not do that, right? If so, should we merge the two commits, or
maybe just explain it a bit more in the commit message?

>key_idx in the bucket entry and pdata in the key store element are
>used for synchronisation. key_idx is used to release an inserted
>entry in the bucket to the reader. Use of pdata for synchronisation
>is required due to updation of an existing entry where-in only
>the pdata is updated without updating key_idx.
>
>
>-/* Search a key from bucket and update its data */
>+/* Search a key from bucket and update its data.
>+ * Writer holds the lock before calling this.
>+ */
> static inline int32_t
> search_and_update(const struct rte_hash *h, void *data, const void *key,
>		struct rte_hash_bucket *bkt, hash_sig_t sig, hash_sig_t alt_hash)
>@@ -499,8 +501,13 @@ search_and_update(const struct rte_hash *h, void *data, const void *key,
> 		k = (struct rte_hash_key *) ((char *)keys +
> 				bkt->key_idx[i] * h->key_entry_size);
> 		if (rte_hash_cmp_eq(key, k->key, h) == 0) {
>-			/* Update data */
>-			k->pdata = data;
>+			/* 'pdata' acts as the synchronization point
>+			 * when an existing hash entry is updated.
>+			 * Key is not updated in this case.
>+			 */
[Wang, Yipeng] I did not quite understand why we need synchronization for the
hash data update. Since the pdata write is already atomic, a lookup will read
either the stale data or the new data, which should be fine without
synchronization. Is it to ensure the ordering of multiple reads in the lookup
threads?

>@@ -1243,11 +1289,19 @@ __rte_hash_lookup_bulk(const struct rte_hash *h, const void **keys,
> 		while (sec_hitmask[i]) {
> 			uint32_t hit_index = __builtin_ctzl(sec_hitmask[i]);
>
>-			uint32_t key_idx = secondary_bkt[i]->key_idx[hit_index];
>+			uint32_t key_idx =
>+					__atomic_load_n(
>+					&secondary_bkt[i]->key_idx[hit_index],
>+					__ATOMIC_ACQUIRE);
> 			const struct rte_hash_key *key_slot =
> 				(const struct rte_hash_key *)(
> 				(const char *)h->key_store +
> 				key_idx * h->key_entry_size);
>+
>+			if (key_idx != EMPTY_SLOT)
[Wang, Yipeng] I think even in the current code we need the EMPTY_SLOT check.
Could you split this out as a separate bug-fix commit?
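
To make sure I am reading the intent correctly, below is a minimal sketch
(illustrative names only, not the actual rte_cuckoo_hash code) of the
release/acquire pairing on key_idx that the patch seems to rely on, using the
same GCC/clang builtins:

#include <stdint.h>
#include <string.h>

/* Hypothetical, simplified stand-ins for the hash structures. */
struct key_slot {
	char key[16];
	void *pdata;
};

struct bucket {
	uint32_t key_idx[8];
};

#define EMPTY_SLOT 0

/* Writer: fill in the key store entry first, then publish the slot index
 * with a release store, so a reader that observes the new key_idx also
 * observes the completed key/pdata writes.
 */
static void
writer_insert(struct bucket *bkt, unsigned int i, struct key_slot *slot,
		uint32_t new_idx, const char *key, void *data)
{
	memcpy(slot->key, key, sizeof(slot->key));
	slot->pdata = data;
	__atomic_store_n(&bkt->key_idx[i], new_idx, __ATOMIC_RELEASE);
}

/* Reader: acquire-load the slot index before dereferencing the key store,
 * and skip empty slots.
 */
static void *
reader_lookup(const struct bucket *bkt, unsigned int i,
		const struct key_slot *key_store)
{
	uint32_t key_idx = __atomic_load_n(&bkt->key_idx[i], __ATOMIC_ACQUIRE);

	if (key_idx == EMPTY_SLOT)
		return NULL;
	return key_store[key_idx].pdata;
}

If that matches the intent, then my pdata question above is only about the
update-in-place path, where key_idx itself does not change.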