From: xinhui
Date: Mon, 06 Jun 2016 11:15:01 +0800
To: Waiman Long
CC: Peter Zijlstra, Arnd Bergmann, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, waiman.long@hp.com
Subject: Re: [PATCH] locking/qrwlock: fix write unlock issue in big endian
References: <1464862148-5672-1-git-send-email-xinhui.pan@linux.vnet.ibm.com> <4399273.0kije2Qdx5@wuerfel> <20160602110200.GZ3190@twins.programming.kicks-ass.net> <201606030718.u537FQg0009963@mx0a-001b2d01.pphosted.com> <5751EF53.3080205@hpe.com>
In-Reply-To: <5751EF53.3080205@hpe.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2016-06-04 04:57, Waiman Long wrote:
> On 06/03/2016 03:17 AM, xinhui wrote:
>>
>> On 2016-06-02 19:02, Peter Zijlstra wrote:
>>> On Thu, Jun 02, 2016 at 12:44:51PM +0200, Arnd Bergmann wrote:
>>>> On Thursday, June 2, 2016 6:09:08 PM CEST Pan Xinhui wrote:
>>>>> diff --git a/include/asm-generic/qrwlock.h b/include/asm-generic/qrwlock.h
>>>>> index 54a8e65..eadd7a3 100644
>>>>> --- a/include/asm-generic/qrwlock.h
>>>>> +++ b/include/asm-generic/qrwlock.h
>>>>> @@ -139,7 +139,7 @@ static inline void queued_read_unlock(struct qrwlock *lock)
>>>>>   */
>>>>>  static inline void queued_write_unlock(struct qrwlock *lock)
>>>>>  {
>>>>> -	smp_store_release((u8 *)&lock->cnts, 0);
>>>>> +	(void)atomic_sub_return_release(_QW_LOCKED, &lock->cnts);
>>>>>  }
>>>>
>>>> Isn't this more expensive than the existing version?
>>>
>>> Yes, loads. And while this might be a suitable fix for asm-generic, it
>>> will introduce a fairly large regression on x86 (which is currently the
>>> only user of this).
>>>
>> Well, out of respect for the private fields of struct __qrwlock:
>> we can keep smp_store_release((u8 *)&lock->cnts, 0) on little-endian machines,
>> as it is quick and causes no performance issue on any other arch (although there is only one user now).
>>
>> BUT we need to use (void)atomic_sub_return_release(_QW_LOCKED, &lock->cnts) on big-endian machines,
>> because it is bad to export struct __qrwlock just to set its private field to 0.
>>
>> How about code like below.
>>
>> static inline void queued_write_unlock(struct qrwlock *lock)
>> {
>> #ifdef __BIG_ENDIAN
>> 	(void)atomic_sub_return_release(_QW_LOCKED, &lock->cnts);
>> #else
>> 	smp_store_release((u8 *)&lock->cnts, 0);
>> #endif
>> }
>>
>> BUT I think that would make things a little complex to understand. :(
>> So in the end, in my opinion, I suggest my patch :)
>> Any thoughts?
>
> Another alternative is to make queued_write_unlock() overridable from asm/qrwlock.h, just like what we did with queued_spin_unlock().
>
Fair enough :) And archs can write better code for themselves.
I will send patch v2 with your Suggested-by. :)

thanks
xinhui

> Cheers,
> Longman
>
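
For reference, a minimal sketch of the override approach suggested above, mirroring the #ifndef pattern that asm-generic/qspinlock.h already uses to let an arch supply its own queued_spin_unlock(). The powerpc header path and the override body are illustrative assumptions, not the v2 patch that was actually sent:

/* include/asm-generic/qrwlock.h -- generic fast path, now overridable */
#ifndef queued_write_unlock
/*
 * On little endian the write-lock byte is the lowest-addressed byte of
 * lock->cnts, so a plain byte store is enough.  On big endian the
 * (u8 *) cast would point at the most significant byte instead, which
 * is why a big-endian arch wants to provide its own definition.
 */
static inline void queued_write_unlock(struct qrwlock *lock)
{
	smp_store_release((u8 *)&lock->cnts, 0);
}
#endif

/*
 * arch/powerpc/include/asm/qrwlock.h -- hypothetical arch override,
 * defined (and #define'd) before <asm-generic/qrwlock.h> is included,
 * the same way x86 overrides queued_spin_unlock().
 */
#define queued_write_unlock queued_write_unlock
static inline void queued_write_unlock(struct qrwlock *lock)
{
	/* Subtract the whole _QW_LOCKED value; no byte addressing needed. */
	(void)atomic_sub_return_release(_QW_LOCKED, &lock->cnts);
}

With this split, x86 (little endian) keeps the cheap byte store, while a big-endian arch pays for the atomic only where it is actually needed.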