From: Wei Liu <wei.liu2@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wei.liu2@citrix.com>,
	Xen-devel <xen-devel@lists.xenproject.org>,
	Lars Kurth <lars.kurth@citrix.com>
Subject: Re: [PATCH 00/18] x86: more bool_t to bool cleanup
Date: Mon, 3 Jul 2017 13:54:36 +0100	[thread overview]
Message-ID: <20170703125435.4n3rn6yhhg5eaze3@citrix.com> (raw)
In-Reply-To: <595A17FB0200007800167A5A@prv-mh.provo.novell.com>

On Mon, Jul 03, 2017 at 02:10:03AM -0600, Jan Beulich wrote:
> >>> On 30.06.17 at 19:01, <wei.liu2@citrix.com> wrote:
> > Seeing that bool_t keeps creeping back in new patches I think the only solution
> > is to get rid of bool_t once and for all, as soon as possible.
> 
> I don't fully agree, and considering the flood of patches you're
> submitting in this area I think I finally need to voice my opinion
> here (I had really meant to only do this in Budapest, on a BoF
> I mean to propose): I appreciate you having and taking the
> time to do this cleanup. Nevertheless I'm not overly happy with
> it. For one, it requires time (even if not very much) to review.
> And as you likely know, patch volume and review bandwidth
> don't really fit together very well. (I had managed to deal with
> all my old, non-RFC reviews during the last week, only to find
> I'm at almost 50 again this morning, and I haven't
> finished working through all the xen-devel traffic having
> come in over the weekend. This is simply frustrating.)

There are two aspects to your email. I want to step back and address
them separately.

1. The cost / benefit of doing cleanup at once vs gradually (this
   series only)

You seem to think doing tree-wide cleanup in one go is a bad thing
because it will make backporting harder (as you argue below).

I disagree. Changes are just changes. You and Andrew have made a lot of
changes in recent releases. They will also make backporting harder, yet
you clearly think they are worth the effort.

It now all comes down to how one calculates cost / benefit. I prefer to
do things all at once so that we don't confuse future contributors and
save the need to point out the same issues over and over again. I value
consistency a lot.

The Linux kernel has done this sort of tree-wide cleanup, and the Xen
toolstack has done it too (either by me or by external contributors). I
think we can do the same thing for the hypervisor.

Obviously if you don't agree with this approach and my value
proposition, there is no point in me pursuing this further. I will let
you and Andrew figure out what is best suited. I just need a clear
answer from you two.

2. The limited bandwidth of reviewers

You seem to think more patches are a bad thing because you don't have
enough time to review all of them. I don't think there is an easy
solution, and I share your frustration. We had better discuss this at
the summit.

> 
> It would perhaps be okay if no comments were needed at all,

This is just impossible: 1. I can't read your and Andrew's minds; 2. I
have my own opinions in certain areas -- I will try to convince you or
be convinced, but that will require discussion.

> but I think in all of the series you sent to this effect there were
> further corrections necessary (leaving aside merely desirable
> ones). Especially bulk cleanup work like this should introduce as
> little overhead to others as possible. Hence the comments here
> also apply to the PV code splitting work you've apparently
> invested quite a bit of time into.
> 

I do try to be as careful as possible with the code -- I don't think I
ever broke the hypervisor too badly, if at all, in my recent work.  Now
I've mostly figured out what you and Andrew like patch-wise. If you
think of anything that can be done better, do let me know.

And frankly, I didn't mean to do the cleanup in the first place -- I
wanted to do something else: PV in PVH. But the code as-is is just not
in the right shape to work with. As I went along, this gradually grew
into a useful project in its own right. To be clear, this is not to
blame anyone involved, past or present. The constraints then were
different from the ones we have now. I've foolishly signed myself up
for this big project because I think it is worth it. :-)

> Furthermore there's the issue of backports: If cleanups like
> these are being done over time (as code is being touched
> anyway), backports (security and non-security ones) generally
> go more smoothly.
> 
> But as said, I mean to bring the overall situation up in
> Budapest, so I'm not sure how much of this needs to be
> discussed up front via mail.
> 

I looked at the proposed session. It only covers the second aspect, so I
wrote this reply to at least get my first point across.

> Jan
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Thread overview: 30+ messages
2017-06-30 17:01 [PATCH 00/18] x86: more bool_t to bool cleanup Wei Liu
2017-06-30 17:01 ` [PATCH 01/18] x86/acpi: use plain bool Wei Liu
2017-06-30 17:01 ` [PATCH 02/18] x86/apic.c: " Wei Liu
2017-06-30 17:26   ` Andrew Cooper
2017-06-30 17:01 ` [PATCH 03/18] x86/debug.c: " Wei Liu
2017-06-30 17:28   ` Andrew Cooper
2017-06-30 17:01 ` [PATCH 04/18] x86/dmi.c: " Wei Liu
2017-06-30 17:01 ` [PATCH 05/18] x86/domctl: " Wei Liu
2017-06-30 17:01 ` [PATCH 06/18] x86/hpet.c: " Wei Liu
2017-06-30 17:01 ` [PATCH 07/18] x86/e820.c: use plain bool Wei Liu
2017-06-30 17:01 ` [PATCH 08/18] x86/i387.c: use plain bool Wei Liu
2017-06-30 17:01 ` [PATCH 09/18] x86/i8259.c: " Wei Liu
2017-06-30 17:01 ` [PATCH 10/18] x86/monitor.c: " Wei Liu
2017-06-30 17:45   ` Andrew Cooper
2017-06-30 18:30     ` Razvan Cojocaru
2017-06-30 17:01 ` [PATCH 11/18] x86/xstate.c: " Wei Liu
2017-06-30 17:01 ` [PATCH 12/18] x86/srat.c: " Wei Liu
2017-06-30 17:01 ` [PATCH 13/18] x86/smpboot.c: " Wei Liu
2017-06-30 17:01 ` [PATCH 14/18] x86/io_apic.c: " Wei Liu
2017-06-30 17:53   ` Andrew Cooper
2017-06-30 17:01 ` [PATCH 15/18] x86/mpparse.c: " Wei Liu
2017-06-30 17:54   ` Andrew Cooper
2017-06-30 17:01 ` [PATCH 16/18] x86/numa.c: " Wei Liu
2017-06-30 17:01 ` [PATCH 17/18] x86/msi.c: " Wei Liu
2017-06-30 17:01 ` [PATCH 18/18] x86/psr.c: " Wei Liu
2017-06-30 18:00 ` [PATCH 00/18] x86: more bool_t to bool cleanup Andrew Cooper
2017-07-03  8:10 ` Jan Beulich
2017-07-03 12:54   ` Wei Liu [this message]
2017-07-03 13:19     ` Jan Beulich
2017-07-04 10:43       ` Wei Liu
