From: Kevin Wolf <kwolf@redhat.com>
To: Paul Durrant <Paul.Durrant@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	"qemu-block@nongnu.org" <qemu-block@nongnu.org>,
	Tim Smith <tim.smith@citrix.com>,
	'Markus Armbruster' <armbru@redhat.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Max Reitz <mreitz@redhat.com>
Subject: Re: [Qemu-devel] xen_disk qdevification
Date: Fri, 9 Nov 2018 11:40:06 +0100	[thread overview]
Message-ID: <20181109104006.GC4635@dhcp-200-186.str.redhat.com>
In-Reply-To: <9c7dc67940a6422aba673afaab1734ca@AMSPEX02CL03.citrite.net>

Am 09.11.2018 um 11:27 hat Paul Durrant geschrieben:
> > -----Original Message-----
> > From: Paul Durrant
> > Sent: 08 November 2018 16:44
> > To: Paul Durrant <Paul.Durrant@citrix.com>; 'Kevin Wolf'
> > <kwolf@redhat.com>
> > Cc: Stefano Stabellini <sstabellini@kernel.org>; qemu-block@nongnu.org;
> > Tim Smith <tim.smith@citrix.com>; qemu-devel@nongnu.org; 'Markus
> > Armbruster' <armbru@redhat.com>; Anthony Perard
> > <anthony.perard@citrix.com>; xen-devel@lists.xenproject.org; Max Reitz
> > <mreitz@redhat.com>
> > Subject: RE: [Qemu-devel] xen_disk qdevification
> > 
> > > -----Original Message-----
> > > From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On
> > Behalf
> > > Of Paul Durrant
> > > Sent: 08 November 2018 15:44
> > > To: 'Kevin Wolf' <kwolf@redhat.com>
> > > Cc: Stefano Stabellini <sstabellini@kernel.org>; qemu-block@nongnu.org;
> > > Tim Smith <tim.smith@citrix.com>; qemu-devel@nongnu.org; 'Markus
> > > Armbruster' <armbru@redhat.com>; Anthony Perard
> > > <anthony.perard@citrix.com>; xen-devel@lists.xenproject.org; Max Reitz
> > > <mreitz@redhat.com>
> > > Subject: Re: [Xen-devel] [Qemu-devel] xen_disk qdevification
> > >
> > > > -----Original Message-----
> > > > From: Kevin Wolf [mailto:kwolf@redhat.com]
> > > > Sent: 08 November 2018 15:21
> > > > To: Paul Durrant <Paul.Durrant@citrix.com>
> > > > Cc: 'Markus Armbruster' <armbru@redhat.com>; Anthony Perard
> > > > <anthony.perard@citrix.com>; Tim Smith <tim.smith@citrix.com>; Stefano
> > > > Stabellini <sstabellini@kernel.org>; qemu-block@nongnu.org; qemu-
> > > > devel@nongnu.org; Max Reitz <mreitz@redhat.com>; xen-
> > > > devel@lists.xenproject.org
> > > > Subject: Re: [Qemu-devel] xen_disk qdevification
> > > >
> > > > Am 08.11.2018 um 15:00 hat Paul Durrant geschrieben:
> > > > > > -----Original Message-----
> > > > > > From: Markus Armbruster [mailto:armbru@redhat.com]
> > > > > > Sent: 05 November 2018 15:58
> > > > > > To: Paul Durrant <Paul.Durrant@citrix.com>
> > > > > > Cc: 'Kevin Wolf' <kwolf@redhat.com>; Tim Smith
> > > <tim.smith@citrix.com>;
> > > > > > Stefano Stabellini <sstabellini@kernel.org>; qemu-
> > block@nongnu.org;
> > > > qemu-
> > > > > > devel@nongnu.org; Max Reitz <mreitz@redhat.com>; Anthony Perard
> > > > > > <anthony.perard@citrix.com>; xen-devel@lists.xenproject.org
> > > > > > Subject: Re: [Qemu-devel] xen_disk qdevification
> > > > > >
> > > > > > Paul Durrant <Paul.Durrant@citrix.com> writes:
> > > > > >
> > > > > > >> -----Original Message-----
> > > > > > >> From: Kevin Wolf [mailto:kwolf@redhat.com]
> > > > > > >> Sent: 02 November 2018 11:04
> > > > > > >> To: Tim Smith <tim.smith@citrix.com>
> > > > > > >> Cc: xen-devel@lists.xenproject.org; qemu-devel@nongnu.org;
> > qemu-
> > > > > > >> block@nongnu.org; Anthony Perard <anthony.perard@citrix.com>;
> > > Paul
> > > > > > Durrant
> > > > > > >> <Paul.Durrant@citrix.com>; Stefano Stabellini
> > > > <sstabellini@kernel.org>;
> > > > > > >> Max Reitz <mreitz@redhat.com>; armbru@redhat.com
> > > > > > >> Subject: xen_disk qdevification (was: [PATCH 0/3] Performance
> > > > > > improvements
> > > > > > >> for xen_disk v2)
> > > > > > >>
> > > > > > >> Am 02.11.2018 um 11:00 hat Tim Smith geschrieben:
> > > > > > >> > A series of performance improvements for disks using the Xen
> > PV
> > > > ring.
> > > > > > >> >
> > > > > > >> > These have had fairly extensive testing.
> > > > > > >> >
> > > > > > >> > The batching and latency improvements together boost the
> > > > throughput
> > > > > > >> > of small reads and writes by two to six percent (measured
> > using
> > > > fio
> > > > > > >> > in the guest)
> > > > > > >> >
> > > > > > >> > Avoiding repeated calls to posix_memalign() reduced the dirty
> > > > heap
> > > > > > >> > from 25MB to 5MB in the case of a single datapath process
> > while
> > > > also
> > > > > > >> > improving performance.
> > > > > > >> >
> > > > > > >> > v2 removes some checkpatch complaints and fixes the CCs
> > > > > > >>
> > > > > > >> Completely unrelated, but since you're the first person
> > touching
> > > > > > >> xen_disk in a while, you're my victim:
> > > > > > >>
> > > > > > >> At KVM Forum we discussed sending a patch to deprecate xen_disk
> > > > because
> > > > > > >> after all those years, it still hasn't been converted to qdev.
> > > > Markus
> > > > > > is
> > > > > > >> currently fixing some other not yet qdevified block device, but
> > > > after
> > > > > > >> that xen_disk will be the only one left.
> > > > > > >>
> > > > > > >> A while ago, a downstream patch review found out that there are
> > > > some
> > > > > > QMP
> > > > > > >> commands that would immediately crash if a xen_disk device were
> > > > present
> > > > > > >> because of the lacking qdevification. This is not the code
> > > quality
> > > > > > >> standard I envision for QEMU. It's time for non-qdev devices to
> > > go.
> > > > > > >>
> > > > > > >> So if you guys are still interested in the device, could
> > someone
> > > > please
> > > > > > >> finally look into converting it?
> > > > > > >>
> > > > > > >
> > > > > > > I have a patch series to do exactly this. It's somewhat involved
> > > as
> > > > I
> > > > > > > need to convert the whole PV backend infrastructure. I will try
> > to
> > > > > > > rebase and clean up my series a.s.a.p.
> > > > > >
> > > > > > Awesome!  Please coordinate with Anthony Perard to avoid
> > > duplicating
> > > > > > work if you haven't done so already.
> > > > >
> > > > > I've come across a bit of a problem that I'm not sure how best to
> > deal
> > > > > with and so am looking for some advice.
> > > > >
> > > > > I now have a qdevified PV disk backend but I can't bring it up
> > because
> > > > > it fails to acquire a write lock on the qcow2 it is pointing at.
> > This
> > > > > is because there is also an emulated IDE drive using the same qcow2.
> > > > > This does not appear to be a problem for the non-qdev xen-disk,
> > > > > presumably because it is not opening the qcow2 until the emulated
> > > > > device is unplugged and I don't really want to introduce similar
> > > > > hackery in my new backend (i.e. I want it to attach to its drive,
> > and
> > > > > hence open the qcow2, during realize).
> > > > >
> > > > > So, I'm not sure what to do... It is not a problem that both a PV
> > > > > backend and an emulated device are using the same qcow2 because they
> > > > > will never actually operate simultaneously so is there any way I can
> > > > > bypass the qcow2 lock check when I create the drive for my PV
> > backend?
> > > > > (BTW I tried re-using the drive created for the emulated device, but
> > > > > that doesn't work because there is a check if a drive is already
> > > > > attached to something).
> > > > >
> > > > > Any ideas?
> > > >
> > > > I think the clean solution is to keep the BlockBackend open in xen-
> > disk
> > > > from the beginning, but not requesting write permissions yet.
> > > >
> > > > The BlockBackend is created in parse_drive(), when qdev parses the
> > > > -device drive=... option. At this point, no permissions are requested
> > > > yet. That is done in blkconf_apply_backend_options(), which is
> > manually
> > > > called from the devices; specifically from ide_dev_initfn() in IDE,
> > and
> > > > I assume you call the function from xen-disk as well.
> > >
> > > Yes, I call it during realize.
> > >
> > > >
> > > > xen-disk should then call this function with readonly=true, and at the
> > > > point of the handover (when the IDE device is already gone) it can
> > call
> > > > blk_set_perm() to request BLK_PERM_WRITE in addition to the
> > permissions
> > > > it already holds.
> > > >
> > >
> > > I tried that and it works fine :-)
> > 
> > Unfortunately I spoke too soon... I still had a patch in place to disable
> > locking checks :-(
> > 
> > What I'm trying to do to maintain compatibility with the existing Xen
> > toolstack (which I think is the only feasible way to make the change
> > while avoiding chicken-and-egg problems) is to use a 'compat' function
> > that creates a drive based on the information that the Xen toolstack
> > writes into xenstore. I'm using drive_new() to do this, and it is this
> > that fails.
> > 
> > So, I have tried setting BDRV_OPT_READ_ONLY and BDRV_OPT_FORCE_SHARE. This
> > allows me to get through drive_new() but later I fail to set the write
> > permission with error "Block node is read-only".

drive_new() is really a legacy interface. It immediately creates a
BlockBackend and takes permissions for it. You don't want that here.
(And I'd like it to go away in a few releases, so better don't let new
code rely on it.)

If you can, it would be better to just call qmp_blockdev_add(). This
creates only a node (BlockDriverState) without a BlockBackend. You'll
get your BlockBackend from the qdev drive property.
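[Editor's note: as a rough illustration, the blockdev-add approach Kevin suggests would create just the node; the node name and image path below are invented for the example.]

```json
{
  "execute": "blockdev-add",
  "arguments": {
    "driver": "qcow2",
    "node-name": "xen-disk0",
    "file": {
      "driver": "file",
      "filename": "/images/guest.qcow2"
    }
  }
}
```

The device's qdev drive property can then reference the node by its node-name, and the BlockBackend (with its permissions) is created by qdev rather than by the legacy drive_new() path.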

> > > > The other option I see would be that you simply create both devices
> > with
> > > > share-rw=on (which results in conf->share_rw == true and therefore
> > > > shared BLK_PERM_WRITE in blkconf_apply_backend_options()), but that
> > > > feels like a hack because you don't actually want to have two writers
> > at
> > > > the same time.
> > > >
> > >
> > > Yes, that does indeed seem like more of a hack. The first option works
> > so
> > > I'll go with that.
> > >
> > 
> > I'll now see what I can do with this idea.
> 
> I think I have a reasonably neat solution, as it is restricted to my
> compat code and can thus go away when the Xen toolstack is re-educated
> to use QMP to instantiate PV backends (once they are all qdevified). I
> simply add "file.locking=off" to the options I pass to drive_new().

I wouldn't agree on "neat", but if you think so...

Kevin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
