From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753803AbdLMQv6 (ORCPT );
	Wed, 13 Dec 2017 11:51:58 -0500
Received: from mail.linuxfoundation.org ([140.211.169.12]:43926 "EHLO
	mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1753697AbdLMQvv (ORCPT );
	Wed, 13 Dec 2017 11:51:51 -0500
Date: Wed, 13 Dec 2017 17:51:55 +0100
From: Greg Kroah-Hartman
To: Boris Brezillon
Cc: Wolfram Sang, linux-i2c@vger.kernel.org, Jonathan Corbet,
	linux-doc@vger.kernel.org, Arnd Bergmann, Przemyslaw Sroka,
	Arkadiusz Golec, Alan Douglas, Bartosz Folta, Damian Kos,
	Alicja Jurasik-Urbaniak, Jan Kotas, Cyprian Wronka,
	Alexandre Belloni, Thomas Petazzoni, Nishanth Menon, Rob Herring,
	Pawel Moll, Mark Rutland, Ian Campbell, Kumar Gala,
	devicetree@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC 2/5] i3c: Add core I3C infrastructure
Message-ID: <20171213165155.GA6003@kroah.com>
References: <1501518290-5723-1-git-send-email-boris.brezillon@free-electrons.com>
	<1501518290-5723-3-git-send-email-boris.brezillon@free-electrons.com>
	<20170801014021.GA20004@kroah.com>
	<20170801124801.38bbc431@bbrezillon>
	<20170801175133.GA30520@kroah.com>
	<20170801233001.4b08834f@bbrezillon>
	<20170802021327.GB23033@kroah.com>
	<20171213172043.24e4d4bc@bbrezillon>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20171213172043.24e4d4bc@bbrezillon>
User-Agent: Mutt/1.9.1 (2017-09-22)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Dec 13, 2017 at 05:20:43PM +0100, Boris Brezillon wrote:
> Hi Greg,
> 
> On Tue, 1 Aug 2017 19:13:27 -0700
> Greg Kroah-Hartman wrote:
> 
> > > > > Unless you see a good reason to not use an R/W lock, I'd like to keep it
> > > > > this way because master IPs are likely to implement advanced queuing
> > > > > mechanisms (allowing one to queue new transfers even if the master is
> > > > > already busy processing other requests), and serializing things at the
> > > > > framework level will just prevent us from using this kind of
> > > > > optimization.
> > > > 
> > > > Unless you can prove otherwise, using a rw lock is almost always worse
> > > > than just a mutex.
> > > 
> > > Is it still true when it's taken in non-exclusive mode most of the
> > > time, and the time you spend in the critical section is non-negligible?
> > > 
> > > I won't pretend I know better than you do what is preferable, it's just
> > > that the RW lock seemed appropriate to me for the situation I tried to
> > > describe here.
> > 
> > Again, measure it.  If you can't measure it, then don't use it.  Use a
> > simple lock instead.  Seriously, don't make it more complex until you
> > really have to.  It sounds like you didn't measure it at all, which
> > isn't good, please do so.
> 
> I'm resurrecting this thread because I finally had the time to implement
> message queuing in the Cadence I3C master driver. I did a test with two
> I3C devices on the bus, their drivers sending as many SDR messages as
> they can in 10 seconds. Here are the results:
> 
>         |  mutex  |  rwsem  |
> ---------------------------------------
> dev1    |  19087  |  29532  |
> dev2    |  19341  |  29118  |
> =======================================
> total   |  38428  |  58650  |
> msg/sec |  ~3843  |  ~5865  |
> 
> The results I'm obtaining here are not so surprising, since all normal
> transfers take the lock in read mode, so there's no contention.
> I didn't measure the impact on performance when there's one
> maintenance operation taking the lock in write mode and several normal
> transfers waiting for this lock, but really, maintenance operations are
> infrequent, and that's not where performance matters in our use case.
> 
> I also did the same test with only one device doing transfers on the
> bus, and this time the mutex wins, but there's not a huge difference.
> 
>         |  mutex  |  rwsem  |
> ---------------------------------------
> total   |  67116  |  66561  |
> msg/sec |  ~6712  |  ~6656  |
> 
> Let me know if you want more information on the test procedure.

Nice, thanks for testing, so it is a real win here, good!

greg k-h
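
The pattern being benchmarked above is the classic reader/writer split: every
normal transfer takes the bus lock in read mode, so transfers queued by
different device drivers can proceed concurrently, while infrequent
maintenance operations take it in write mode and exclude everything else.
Below is a minimal kernel-style C sketch of that scheme; the struct and
helper names are illustrative placeholders, not the RFC's actual API.

/*
 * Illustrative sketch only: i3c_bus and the i3c_bus_*_lock() helpers are
 * placeholder names.  Normal SDR transfers share the rw_semaphore in read
 * mode; maintenance operations (e.g. dynamic address assignment) take it
 * in write mode.
 */
#include <linux/rwsem.h>

struct i3c_bus {
	struct rw_semaphore lock;
	/* ... bus state ... */
};

static inline void i3c_bus_init(struct i3c_bus *bus)
{
	init_rwsem(&bus->lock);
}

/* Normal transfer path: many of these can hold the lock at once. */
static inline void i3c_bus_normaluse_lock(struct i3c_bus *bus)
{
	down_read(&bus->lock);
}

static inline void i3c_bus_normaluse_unlock(struct i3c_bus *bus)
{
	up_read(&bus->lock);
}

/* Maintenance path: exclusive, blocks all transfers while held. */
static inline void i3c_bus_maintenance_lock(struct i3c_bus *bus)
{
	down_write(&bus->lock);
}

static inline void i3c_bus_maintenance_unlock(struct i3c_bus *bus)
{
	up_write(&bus->lock);
}

With the numbers reported above, the shared read path is what buys the
roughly 53% throughput gain in the two-device case (58650 vs. 38428 messages
in 10 seconds), at a cost of under 1% in the single-device case (66561 vs.
67116).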