From: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
To: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: James Bottomley <James.Bottomley@hansenpartnership.com>,
	media-submaintainers@linuxtv.org,
	ksummit-discuss@lists.linuxfoundation.org
Subject: Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
Date: Mon, 17 Jun 2019 08:03:15 -0300
Message-ID: <20190617080315.4d8fd076@coco.lan>
In-Reply-To: <20190615110107.GA5974@pendragon.ideasonboard.com>

On Sat, 15 Jun 2019 14:01:07 +0300,
Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:

> Hi Mauro,
> 
> On Fri, Jun 14, 2019 at 12:11:37PM -0300, Mauro Carvalho Chehab wrote:
> > On Fri, 14 Jun 2019 15:58:07 +0200, Greg KH wrote:
> > > On Fri, Jun 14, 2019 at 10:24:24AM -0300, Mauro Carvalho Chehab wrote:  
> > >> On Fri, 14 Jun 2019 13:12:22 +0300, Laurent Pinchart wrote:
> > >>> On Thu, Jun 13, 2019 at 10:59:16AM -0300, Mauro Carvalho Chehab wrote:    
> > >>>> On Thu, 06 Jun 2019 19:24:35 +0300, James Bottomley wrote:
> > >>>>       
> > >>>>> [splitting issues to shorten replies]
> > >>>>> On Thu, 2019-06-06 at 17:58 +0200, Greg KH wrote:      
> > >>>>>> On Thu, Jun 06, 2019 at 06:48:36PM +0300, James Bottomley wrote:        
> > >>>>>>> This is probably best done as two separate topics
> > >>>>>>> 
> > >>>>>>> 1) Pull network: The pull depth is effectively how many pulls your
> > >>>>>>> tree does before it goes to Linus, so pull depth 0 is sent straight
> > >>>>>>> to Linus, pull depth 1 is sent to a maintainer who sends to Linus
> > >>>>>>> and so on.  We've previously spent time discussing how increasing
> > >>>>>>> the pull depth of the network would reduce the amount of time Linus
> > >>>>>>> spends handling pull requests.  However, in the areas I play, like
> > >>>>>>> security, we seem to be moving in the opposite direction
> > >>>>>>> (encouraging people to go from pull depth 1 to pull depth 0).  If
> > >>>>>>> we're deciding to move to a flat tree model, where everything is
> > >>>>>>> depth 0, that's fine, I just think we could do with making a formal
> > >>>>>>> decision on it so we don't waste energy encouraging greater tree
> > >>>>>>> depth.        
> > >>>>>> 
> > >>>>>> That depth "change" was due to the perceived problems that having a
> > >>>>>> deeper pull depth was causing.  To sort that out, Linus asked for
> > >>>>>> things to go directly to him.        
> > >>>>> 
> > >>>>> This seems to go beyond problems with one tree and is becoming a trend.
> > >>>>>       
> > >>>>>> It seems like the real issue is the problem with that subsystem
> > >>>>>> collection point, and the fact that the depth changed is a sign that
> > >>>>>> our model works well (i.e. everyone can be routed around.)        
> > >>>>> 
> > >>>>> I'm not really interested in calling out "problem" maintainers, or
> > >>>>> indeed having another "my patch collection method is better than yours"
> > >>>>> type discussion.  What I was fishing for is whether the general
> > >>>>> impression that greater tree depth is worth striving for is actually
> > >>>>> correct, or we should all give up now and simply accept that the
> > >>>>> current flat tree is the best we can do and, indeed, is the model that
> > >>>>> works best for Linus.  I get the impression this may be the case, but I
> > >>>>> think that making sure, by having an actual discussion among the
> > >>>>> interested parties who will be at the kernel summit, would be useful.
> > >>>> 
> > >>>> In the media subsystem, we came from a "depth 1" model, moving toward a "depth 2" model:
> > >>>> 
> > >>>> patch author -> media/driver maintainer -> subsystem maintainer -> Linus      
> > >>> 
> > >>> I'd like to use this opportunity to ask again for pull requests to be
> > >>> pulled instead of cherry-picked.    
> > >> 
> > >> There are other forums for discussing internal media maintainership,
> > >> like the weekly meetings we have and our own mailing lists.    
> > > 
> > > You all have weekly meetings?  That's crazy...  
> > 
> > Yep, we hold a meeting every week, usually taking about one hour, via IRC
> > on this channel:
> > 
> > 	https://linuxtv.org/irc/irclogger_logs//media-maint
> >   
> > > Anyway, I'll reiterate Laurent here: keeping things as a pull instead of
> > > cherry-picking does make things a lot easier for contributors.  I know
> > > I'm guilty of it as well as a maintainer, but that's only until I start
> > > trusting the submitter.  Once that happens, pulling is _much_ easier as
> > > a maintainer instead of individual patches for the usual reason that
> > > linux-next has already verified that the sub-tree works properly before
> > > I merge it in.
> > > 
> > > Try it, it might reduce your load; it has for me.
> > 
> > If you think this is relevant to a broader audience, let me reply with
> > a long answer about that. I had prepared it, intending to send it to our
> > internal media maintainers' ML (added as Cc).
> > 
> > Yet, I still think that this is the media maintainers' dirty laundry
> > and should be discussed elsewhere ;-)
> 
> I'll do my best to reply below with comments that are not too specific
> to the media subsystem, hoping it will be useful for a wider audience
> :-)
> 
> > ---
> > 
> > Laurent,
> > 
> > I have already explained this a few times, including during the last Media
> > Summit, but it seems you missed the point.
> > 
> > As shown in our stats:
> > 	https://linuxtv.org/patchwork_stats.php
> > 
> > We're receiving about 400 to 1000 patches per month, meaning 18 to 45
> > patches per working day (22 working days/month). Of those, we accept about
> > 100 to 300 patches per month (4.5 to 13.6 patches per day).
> > 
> > Currently, I review all accepted patches.  
> 
> As others have said or hinted, this is where things start going wrong.
> As a maintainer, your duty isn't to work 24 hours a day and review every
> single patch. The duty of a maintainer is to help the subsystem stay
> healthy and move forward. This can involve lots of technical work, but
> it doesn't have to: that work can also be delegated (provided, of course,
> that the subsystem has technically competent and reliable
> contributors who are willing to help there). In my opinion,
> maintaining a subsystem is partly a technical job and partly a social
> job. Being excellent at both is the icing on the cake, not a minimal
> requirement.

There are a few reasons why I keep doing that. Among them:

1) I'd like to follow what's happening in the subsystem. Reviewing the
patches allows me to have at least a rough idea of what's going on,
which makes it easier when we need to discuss possible changes to the
core;

2) An additional reviewer improves code quality. One of the pieces of
feedback I get from sub-maintainers is that we need more core review.
So, I'm doing my part;

3) I like my work.

> 
> > I have the bandwidth to review 4.5 to 13.6 patches per day, though not
> > without a lot of personal effort. For that, I use part of my spare time, as
> > I have other duties, plus I develop patches myself. So, in order to be able
> > to handle those, I typically work almost non-stop, starting at 6am and
> > sometimes going up to 10pm. Also, when there is too much pending (like on
> > busy months), I also handle patches during weekends.
> 
> I wasn't aware of your personal work schedule, and I'm sorry to hear
> it's so extreme. This is not sustainable, and I think this clearly shows
> that a purely flat tree model with a single maintainer has difficulty
> scaling for large subsystems. If anything, this calls in my opinion for
> increasing the pull network depth to make your job bearable again.

It has been sustainable: I've been doing it for the last 10 years.

It is not every day that I work from 6am to 10pm. And it is not that I don't
have a social life: I still have time for my hobbies and for my family.

> 
> > However, 45 patches/day (225 patches per week) is a lot for me to
> > review. I can't commit to handling such an amount of patches.
> > 
> > That's why I review patches after a first review from the other
> > media maintainers. The way I identify which patches I should review is
> > by the pull requests I receive.
> > 
> > We could do a different workflow. For example, once a media maintainer
> > reviews a patch, it could be delegated to me in patchwork. This would likely
> > increase the time to merge things, as the workflow would change from:
> > 
> >  +-------+    +------------------+    +---------------+
> >  | patch | -> | media maintainer | -> | submaintainer | 
> >  +-------+    +------------------+    +---------------+
> > 
> > to: 
> > 
> >  +-------+    +------------------+    +---------------+    +------------------+    +---------------+
> >  | patch | -> | media maintainer | -> | submaintainer | -> | media maintainer | -> | submaintainer | 
> >  +-------+    +------------------+    +---------------+    +------------------+    +---------------+
> > 
> >   \------------------------v--------------------------/    \---------------------v------------------/
> > 			Patchwork                                           Pull Request
> > 
> > The pull request part of the new chain could eventually be (semi-)automated
> > by some scripting that would just compute a checksum of the received patches
> > that were previously reviewed by me. If it matches, and if it passes the
> > usual checks I run for PR patches, it would be pushed to some tree. Still, it
> > would take more time than the previous flow.
> 
> I'm sorry, but I don't think this goes in the right direction. With the
> number of patches increasing, and the number of hours in a maintainer's
> day desperately not agreeing to increase above 24, the only scalable
> solution I see is to stop reviewing every single patch that is accepted
> in the subsystem tree, through delegation/sharing of maintainer's
> duties, and trust. I know it can be difficult to let go of a driver one
> has authored and let it live its life, so I can only guess the
> psychological effect is much worse for a whole subsystem. I've authored
> drivers that I cared about and still care about, and I need to constantly
> remind myself that too much love can lead to suffocation. The most loving
> parent has to accept that their children will one day leave home, but
> that it doesn't mean their lives will part forever. I think the same
> applies to free software.

That's not the point. The point is that, while I have time to do
patch reviews, I want to keep doing them.

Also, as I said, this is the media subsystem's dirty laundry: whether I keep
doing patch reviews or not - and how this will work - is something for
our internal discussions, and not for KS.

> 
> > Also, as discussed during the media summit, in order to have that
> > kind of automation, we would need to improve our infrastructure, moving
> > the tests from a noisy, overheated server I have on my desk to some VM
> > in the cloud, once we get funds for it.
> 
> Sure, and I think this is a topic that would gain from being discussed
> with a wider audience. The media subsystem isn't the only one to be
> large enough that it would benefit a lot from automation (I would even
> argue that all subsystems could benefit from that), so sharing
> experiences, and hearing other subsystems' wishes, would be useful here.

Maybe.

Are there any other subsystems currently working to get funding for
hosting/automation?

> 
> > In any case, a discussion that affects the patch latency and our internal
> > procedures within the media subsystem is something that should be discussed
> > with the other media maintainers, and not at KS.
> 
> Isn't improving patch latency something that would be welcome throughout
> the kernel?

Your proposed change won't improve it. It will either stay the same,
if we keep the current flow:

	  +-------+    +------------------+    +---------------+
	  | patch | -> | media maintainer | -> | submaintainer | 
	  +-------+    +------------------+    +---------------+

(whether I review the patch or not, the flow - and thus the patch
latency - will be the same)

Or increase it, if we change it to:

  +-------+    +------------------+    +---------------+    +------------------+    +---------------+
  | patch | -> | media maintainer | -> | submaintainer | -> | media maintainer | -> | submaintainer | 
  +-------+    +------------------+    +---------------+    +------------------+    +---------------+
 
   \------------------------v--------------------------/    \---------------------v------------------/
 			Patchwork                                           Pull Request
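
Just to make that "(semi-)automated" checksum step more concrete, here is
a rough sketch of the kind of check I have in mind. This is hypothetical
code, not an existing tool: the helper names are made up, and using
"git patch-id --stable" as the "checksum" is an assumption:

#!/usr/bin/env python3
# Illustrative sketch: check that every patch in a received pull request
# matches a patch that was already reviewed, by comparing stable patch
# ids ("git patch-id --stable" output) instead of raw checksums.
import subprocess

def patch_id(patch_text):
    """Compute the stable patch id of one patch (the diff, as text)."""
    out = subprocess.run(["git", "patch-id", "--stable"],
                         input=patch_text, capture_output=True,
                         text=True, check=True).stdout
    # "git patch-id" prints "<patch-id> <commit-id>" on a single line.
    return out.split()[0] if out else None

def verify_pull_request(pr_patches, reviewed_ids):
    """True only if all patches in the pull request were reviewed."""
    return all(patch_id(p) in reviewed_ids for p in pr_patches)

A nice property of "git patch-id" is that it ignores whitespace and line
numbers, so a patch rebased by the submaintainer would still match the
version that was reviewed.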

More below:

> 
> > -
> > 
> > That said, one day I may not be able to review all accepted patches.
> > When this day comes, I'll just apply the pull requests I receive.
> > 
> > -
> > 
> > Finally, if you're so interested in improving our maintenance model,
> > I beg you: please handle the patches delegated to you:
> > 
> > 	https://patchwork.linuxtv.org/project/linux-media/list/?series=&submitter=&state=&q=&archive=&delegate=2510
> > 
> > As we agreed at our media meetings, a couple of weeks ago I handled
> > about 60 patches that had been waiting for your review since 2017 -
> > basically the ones not touching the drivers you currently
> > maintain - but there are still 23 patches sent between 2013 and 2018
> > over there, plus the 48 patches sent in 2019.

Reviewing the patches in your queue, or delegating them to others, is
actually a concrete thing you could do to reduce the patch
handling latency.
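
By the way, checking that queue can even be scripted. Below is a minimal
sketch, assuming the instance exposes the standard Patchwork REST API;
the base URL, the filter and state names, and the delegate username are
assumptions/examples, not a tested setup:

#!/usr/bin/env python3
# Minimal sketch: list the open patches delegated to a given reviewer
# through the Patchwork REST API (assumed to be enabled on the server).
import requests

BASE_URL = "https://patchwork.linuxtv.org/api/patches/"

def pending_patches(delegate):
    """Yield (date, name) for patches still delegated to 'delegate'."""
    url = BASE_URL
    params = {"delegate": delegate, "state": "new"}
    while url:
        resp = requests.get(url, params=params)
        resp.raise_for_status()
        for patch in resp.json():
            yield patch["date"], patch["name"]
        # Patchwork paginates via the standard HTTP Link header;
        # the "next" URL already carries all query parameters.
        url = resp.links.get("next", {}).get("url")
        params = None

for date, name in pending_patches("laurent.pinchart"):
    print(date, name)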

Thanks,
Mauro

