From: Kevin Wolf <kwolf@redhat.com>
To: ashish mittal <ashmit602@gmail.com>
Cc: Jeff Cody <jcody@redhat.com>,
	qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>,
	Markus Armbruster <armbru@redhat.com>,
	"Daniel P. Berrange" <berrange@redhat.com>,
	Ashish Mittal <ashish.mittal@veritas.com>,
	Stefan Hajnoczi <stefanha@gmail.com>,
	Ketan.Nilangekar@veritas.com, Abhijit.Dey@veritas.com
Subject: Re: [Qemu-devel] vxhs caching behaviour (was: [PATCH v4 RFC] block/vxhs: Initial commit to add Veritas HyperScale VxHS block device support)
Date: Fri, 9 Sep 2016 12:38:51 +0200
Message-ID: <20160909103851.GA6682@noname.redhat.com>
In-Reply-To: <CAAo6VWNwYdCJ_TkM5a8Spnjv_CA2pr0rsxwdp7MxiTfdAyBdAg@mail.gmail.com>

On 08.09.2016 22:46, ashish mittal wrote:
> Hi Kevin,
> 
> By design, our writeback cache is on a non-volatile SSD device. We do
> async writes to this cache and also maintain a persistent index map of
> the data written. This gives us the ability to recover the writeback
> cache if needed.

So your server application uses something like O_SYNC to write data to
the cache SSD, to ensure that data never sits only in the kernel page
cache or in a volatile write cache on the SSD?
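
For clarity, this is the kind of write path I mean -- a minimal sketch
with a made-up device path, not your actual server code:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096] = "";
        /* O_SYNC: pwrite() completes only once the data is on stable
         * storage, never while it sits only in the page cache. */
        int fd = open("/dev/cache-ssd", O_WRONLY | O_SYNC);

        if (fd < 0 || pwrite(fd, buf, sizeof(buf), 0) < 0) {
            perror("cache write");
            return 1;
        }
        close(fd);
        return 0;
    }

With a write path like that, any completed write is already stable, and
a separate flush really would be a no-op.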

Kevin

> On Thu, Sep 8, 2016 at 7:20 AM, Kevin Wolf <kwolf@redhat.com> wrote:
> > On 08.09.2016 16:00, Jeff Cody wrote:
> >> > >> +/*
> >> > >> + * This is called by QEMU when a flush gets triggered from within
> >> > >> + * a guest at the block layer, either for IDE or SCSI disks.
> >> > >> + */
> >> > >> +int vxhs_co_flush(BlockDriverState *bs)
> >> > >> +{
> >> > >> +    BDRVVXHSState *s = bs->opaque;
> >> > >> +    uint64_t size = 0;
> >> > >> +    int ret = 0;
> >> > >> +
> >> > >> +    ret = qemu_iio_ioctl(s->qnio_ctx,
> >> > >> +            s->vdisk_hostinfo[s->vdisk_cur_host_idx].vdisk_rfd,
> >> > >> +            VDISK_AIO_FLUSH, &size, NULL, IIO_FLAG_SYNC);
> >> > >> +
> >> > >> +    if (ret < 0) {
> >> > >> +        /*
> >> > >> +         * We currently do not handle a flush ioctl failure
> >> > >> +         * caused by a network disconnect, because all writes
> >> > >> +         * are committed to persistent storage on completion:
> >> > >> +         * this flush call is a no-op and we can safely return
> >> > >> +         * success status to the caller.
> >> > >
> >> > > I'm not sure I understand here.  Are you saying the qemu_iio_ioctl() call
> >> > > above is a no-op?
> >> > >
> >> >
> >> > Yes, qemu_iio_ioctl(VDISK_AIO_FLUSH) is only a placeholder at present,
> >> > in case we later want to add some functionality to it. I have now
> >> > added a comment to this effect to avoid any confusion.
> >> >
> >>
> >> The problem is that you don't know which version of the qnio library any
> >> given QEMU binary will be using, since it is a shared library.  Future
> >> versions may implement the flush ioctl as described above, in which case
> >> we would hide a valid error.
> >>
> >> Am I correct in assuming that this call suppresses errors because the
> >> library returns an error for the unknown ioctl operation VDISK_AIO_FLUSH?
> >> If so, and you want a placeholder here for flushing, you should go all the
> >> way and stub out the underlying ioctl call to return success.  Then QEMU
> >> can at least rely on the error return from the flush operation.
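> >>
> >> Something like this on the library side would do -- just a sketch, with
> >> the qemu_iio_ioctl() signature guessed from the caller above rather than
> >> taken from the actual libqnio code:
> >>
> >>     int qemu_iio_ioctl(void *ctx, int rfd, int opcode, uint64_t *size,
> >>                        void *opaque, int flags)
> >>     {
> >>         switch (opcode) {
> >>         case VDISK_AIO_FLUSH:
> >>             /* Flushing is currently a no-op because writes are
> >>              * persistent on completion, so report success explicitly
> >>              * instead of failing as an unknown ioctl. */
> >>             return 0;
> >>         default:
> >>             /* ... existing dispatch for the other opcodes ... */
> >>             return -1;
> >>         }
> >>     }
> >>
> >> Stubbing it in the library rather than papering over errors in QEMU
> >> keeps the driver honest once a real flush shows up.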
> >
> > So what's the story behind the missing flush command?
> >
> > Does the server always use something like O_SYNC, i.e. do all potential
> > write caches in the stack operate in writethrough mode, so that each
> > write request completes successfully only once the data is safe on disk
> > rather than in a volatile writeback cache?
> >
> > As soon as any writeback cache can be involved (e.g. the kernel page
> > cache or a volatile disk cache) and there is no flush command (a real
> > one, not just stubbed), the driver is not operating correctly and
> > therefore not ready for inclusion.
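> >
> > Concretely, once the library implements a real flush, the driver side
> > becomes trivial -- a sketch based on the patch code quoted above, with
> > the result passed through instead of suppressed:
> >
> >     int vxhs_co_flush(BlockDriverState *bs)
> >     {
> >         BDRVVXHSState *s = bs->opaque;
> >         uint64_t size = 0;
> >
> >         /* Report flush failures to the block layer instead of
> >          * swallowing them. */
> >         return qemu_iio_ioctl(s->qnio_ctx,
> >                 s->vdisk_hostinfo[s->vdisk_cur_host_idx].vdisk_rfd,
> >                 VDISK_AIO_FLUSH, &size, NULL, IIO_FLAG_SYNC);
> >     }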
> >
> > So Ashish, can you tell us something about caching behaviour across the
> > storage stack when vxhs is involved?
> >
> > Kevin
