From mboxrd@z Thu Jan  1 00:00:00 1970
From: Jerin Jacob
Subject: Re: [PATCH v3 2/3] ring: synchronize the load and store of the tail
Date: Sat, 29 Sep 2018 16:27:12 +0530
Message-ID: <20180929105651.GB30457@jerin>
References: <20180807031943.5331-1-gavin.hu@arm.com>
 <1537172244-64874-1-git-send-email-gavin.hu@arm.com>
 <1537172244-64874-2-git-send-email-gavin.hu@arm.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1537172244-64874-2-git-send-email-gavin.hu@arm.com>
To: Gavin Hu
Cc: dev@dpdk.org, Honnappa.Nagarahalli@arm.com, steve.capper@arm.com,
 Ola.Liljedahl@arm.com, nd@arm.com, stable@dpdk.org
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

-----Original Message-----
> Date: Mon, 17 Sep 2018 16:17:23 +0800
> From: Gavin Hu
> To: dev@dpdk.org
> CC: gavin.hu@arm.com, Honnappa.Nagarahalli@arm.com, steve.capper@arm.com,
>  Ola.Liljedahl@arm.com, jerin.jacob@caviumnetworks.com, nd@arm.com,
>  stable@dpdk.org
> Subject: [PATCH v3 2/3] ring: synchronize the load and store of the tail
> X-Mailer: git-send-email 2.7.4
>
> Synchronize the load-acquire of the tail with the store-release inside
> update_tail: the store-release ensures that all the ring operations,
> enqueue or dequeue, are seen by the observers on the other side as soon
> as they see the updated tail. The load-acquire is needed here because a
> data dependency is not a reliable way to enforce ordering: the compiler
> might break it by saving values to temporaries to boost performance.
> When computing free_entries and avail_entries, load the heads and tails
> with atomic semantics instead.
>
> The patch was benchmarked with test/ring_perf_autotest; it decreases
> the enqueue/dequeue latency by 5% to 27.6% with two lcores. The real
> gains depend on the number of lcores, the depth of the ring, and
> whether the ring is SP/SC or MP/MC. For one lcore it also improves a
> little, by about 3% to 4%.
> The biggest improvement is in the MP/MC case with two lcores and a
> ring size of 32, where latency drops by up to (3.26-2.36)/3.26 = 27.6%.
>
> This patch is a bug fix; the performance improvement is a bonus. In our
> analysis the improvement comes from cache line pre-filling after
> hoisting the load-acquire up above __atomic_compare_exchange_n.
>
> The test command:
> $sudo ./test/test/test -l 16-19,44-47,72-75,100-103 -n 4 --socket-mem=\
> 1024 -- -i
>
> Test result with this patch (two cores):
> SP/SC bulk enq/dequeue (size: 8): 5.86
> MP/MC bulk enq/dequeue (size: 8): 10.15
> SP/SC bulk enq/dequeue (size: 32): 1.94
> MP/MC bulk enq/dequeue (size: 32): 2.36
>
> Test result without this patch (two cores):
> SP/SC bulk enq/dequeue (size: 8): 6.67
> MP/MC bulk enq/dequeue (size: 8): 13.12
> SP/SC bulk enq/dequeue (size: 32): 2.04
> MP/MC bulk enq/dequeue (size: 32): 3.26
>
> Fixes: 39368ebfc6 ("ring: introduce C11 memory model barrier option")
> Cc: stable@dpdk.org
>
> Signed-off-by: Gavin Hu
> Reviewed-by: Honnappa Nagarahalli
> Reviewed-by: Steve Capper
> Reviewed-by: Ola Liljedahl

Tested on a ThunderX2 server platform. It has a minor performance impact
on the non-burst variants, but for the burst variant I see a similar
performance improvement, and it is also correct under C11 semantics.

Acked-by: Jerin Jacob
Tested-by: Jerin Jacob
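
For readers following the archive, below is a minimal sketch of the
acquire/release pairing the commit message describes, written against the
GCC __atomic builtins. The ring layout and function names (toy_ring,
toy_enqueue, toy_dequeue) are simplified single-producer/single-consumer
placeholders for illustration only, not the actual rte_ring C11 code; the
real MP/MC path also moves the heads through __atomic_compare_exchange_n,
which is omitted here.

    #include <stdint.h>

    struct toy_ring {
        uint32_t size;                 /* number of slots, power of two */
        uint32_t mask;                 /* size - 1, for index wrapping */
        uint32_t prod_head, prod_tail; /* producer-owned indexes */
        uint32_t cons_head, cons_tail; /* consumer-owned indexes */
        void *slots[];                 /* object pointers */
    };

    /* Enqueue one object; returns 0 on success, -1 if the ring is full. */
    static inline int
    toy_enqueue(struct toy_ring *r, void *obj)
    {
        /*
         * Load-acquire of the consumer tail: pairs with the
         * store-release in toy_dequeue() so the consumer's slot reads
         * are ordered before the producer reuses the slot.
         */
        uint32_t cons_tail = __atomic_load_n(&r->cons_tail, __ATOMIC_ACQUIRE);
        uint32_t prod_head = r->prod_head;

        uint32_t free_entries = r->size + cons_tail - prod_head;
        if (free_entries == 0)
            return -1;

        r->slots[prod_head & r->mask] = obj;
        r->prod_head = prod_head + 1;

        /*
         * Store-release of the producer tail: the slot write above is
         * visible to the consumer as soon as it observes the new tail.
         */
        __atomic_store_n(&r->prod_tail, prod_head + 1, __ATOMIC_RELEASE);
        return 0;
    }

    /* Dequeue one object; returns 0 on success, -1 if the ring is empty. */
    static inline int
    toy_dequeue(struct toy_ring *r, void **obj)
    {
        /* Load-acquire of the producer tail, pairing with the enqueue side. */
        uint32_t prod_tail = __atomic_load_n(&r->prod_tail, __ATOMIC_ACQUIRE);
        uint32_t cons_head = r->cons_head;

        uint32_t avail_entries = prod_tail - cons_head;
        if (avail_entries == 0)
            return -1;

        *obj = r->slots[cons_head & r->mask];
        r->cons_head = cons_head + 1;

        /* Store-release of the consumer tail, handing the slot back. */
        __atomic_store_n(&r->cons_tail, cons_head + 1, __ATOMIC_RELEASE);
        return 0;
    }

The point of the fix is the pairing: computing free_entries/avail_entries
from a plain (data-dependency-ordered) load of the opposing tail is not
guaranteed to be ordered against the slot accesses, so the load must carry
acquire semantics to match the release on the tail update.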