* a question about use of CEPH_IOC_SYNCIO in write
From: sa514164-fOMaevN1BEbsJZF79Ady7g @ 2017-09-01 14:24 UTC (permalink / raw)
  To: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users-Qp0mS5GaXlQ

Hi:
        I want to ask a question about the CEPH_IOC_SYNCIO flag.
        I know that when the O_SYNC or O_DIRECT flag is used, the write call takes one of two code paths that are different from the path used with CEPH_IOC_SYNCIO.
        I found the comment about CEPH_IOC_SYNCIO here:

        /*
         * CEPH_IOC_SYNCIO - force synchronous IO
         *
         * This ioctl sets a file flag that forces the synchronous IO that
         * bypasses the page cache, even if it is not necessary.  This is
         * essentially the opposite behavior of IOC_LAZYIO.  This forces the
         * same read/write path as a file opened by multiple clients when one
         * or more of those clients is opened for write.
         *
         * Note that this type of sync IO takes a different path than a file
         * opened with O_SYNC/D_SYNC (writes hit the page cache and are
         * immediately flushed on page boundaries).  It is very similar to
         * O_DIRECT (writes bypass the page cache) except that O_DIRECT writes
         * are not copied (user page must remain stable) and O_DIRECT writes
         * have alignment restrictions (on the buffer and file offset).
         */
        #define CEPH_IOC_SYNCIO _IO(CEPH_IOCTL_MAGIC, 5)
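
        For reference, here is how I understand the flag is set from user space (a minimal, untested sketch; the file path is made up, and CEPH_IOCTL_MAGIC = 0x97 is copied from fs/ceph/ioctl.h):

        #include <sys/ioctl.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <stdio.h>

        #define CEPH_IOCTL_MAGIC 0x97
        #define CEPH_IOC_SYNCIO  _IO(CEPH_IOCTL_MAGIC, 5)

        int main(void)
        {
            /* hypothetical file on a CephFS mount */
            int fd = open("/mnt/cephfs/testfile", O_WRONLY | O_CREAT, 0644);
            if (fd < 0) {
                perror("open");
                return 1;
            }

            /* set the per-file flag; subsequent reads/writes take the
             * synchronous, cache-bypassing path */
            if (ioctl(fd, CEPH_IOC_SYNCIO) < 0) {
                perror("ioctl(CEPH_IOC_SYNCIO)");
                close(fd);
                return 1;
            }

            /* this write now blocks until the OSDs acknowledge it */
            if (write(fd, "hello", 5) != 5)
                perror("write");

            close(fd);
            return 0;
        }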

        My questions are:
        1. "This forces the same read/write path as a file opened by multiple clients when one or more of those clients is opened for write." -- Does this mean that multiple clients all execute the same code path when they all use the CEPH_IOC_SYNCIO flag? Does using CEPH_IOC_SYNCIO in all clients affect coherency and performance?
        2. "...except that O_DIRECT writes are not copied (user page must remain stable)" -- As I understand it, when a thread writes with the CEPH_IOC_SYNCIO flag, the write call blocks until the Ceph OSDs and MDS send back their responses. So even with CEPH_IOC_SYNCIO (where, I guess, the user pages are not locked), the user still cannot reuse those pages until the write returns. How does the CEPH_IOC_SYNCIO flag make better use of user-space memory?


* Re: a question about use of CEPH_IOC_SYNCIO in write
From: Gregory Farnum @ 2017-09-05 20:29 UTC (permalink / raw)
  To: sa514164-fOMaevN1BEbsJZF79Ady7g
  Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users

On Fri, Sep 1, 2017 at 7:24 AM,  <sa514164-fOMaevN1BEbsJZF79Ady7g@public.gmane.org> wrote:
>         My questions are:
>         1. "This forces the same read/write path as a file opened by multiple clients when one or more of those clients is opened for write." -- Does this mean that multiple clients all execute the same code path when they all use the CEPH_IOC_SYNCIO flag? Does using CEPH_IOC_SYNCIO in all clients affect coherency and performance?

If you're just using the normal interfaces, you don't need to play
around with this. I *think* this ioctl exists only so that if you are
using lazyio (which disables the usual cache coherence), you can still
get data IO that is coordinated with other clients.
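
Roughly like this, I mean (untested sketch; I'm assuming CEPH_IOC_LAZYIO
is _IO(CEPH_IOCTL_MAGIC, 4) per fs/ceph/ioctl.h, and the path is made up):

#include <sys/ioctl.h>
#include <fcntl.h>
#include <unistd.h>

#define CEPH_IOCTL_MAGIC 0x97
#define CEPH_IOC_LAZYIO  _IO(CEPH_IOCTL_MAGIC, 4)  /* assumed value */
#define CEPH_IOC_SYNCIO  _IO(CEPH_IOCTL_MAGIC, 5)

int main(void)
{
    int fd = open("/mnt/cephfs/shared", O_RDWR);  /* hypothetical path */
    if (fd < 0)
        return 1;

    /* LAZYIO relaxes the usual CephFS cache coherence for this file... */
    ioctl(fd, CEPH_IOC_LAZYIO);

    /* ...while SYNCIO still forces its data IO down the coordinated,
     * cache-bypassing path shared with other clients. */
    ioctl(fd, CEPH_IOC_SYNCIO);

    close(fd);
    return 0;
}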

>         2. "...except that O_DIRECT writes are not copied (user page must remain stable)" -- As I understand it, when a thread writes with the CEPH_IOC_SYNCIO flag, the write call blocks until the Ceph OSDs and MDS send back their responses. So even with CEPH_IOC_SYNCIO (where, I guess, the user pages are not locked), the user still cannot reuse those pages until the write returns. How does the CEPH_IOC_SYNCIO flag make better use of user-space memory?

I'm not very familiar with these mechanisms, but I think it's saying
that if you use CEPH_IOC_SYNCIO with an async IO interface, then once
the async write returns, the kernel will have made an internal copy of
the data and the caller can reuse the pages?
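
For contrast, the O_DIRECT restrictions the comment mentions look
something like this in practice (untested sketch; the 4096 alignment
and the path are guesses):

#define _GNU_SOURCE  /* for O_DIRECT on glibc */
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    void *buf;

    /* O_DIRECT typically requires the buffer to be block-aligned,
     * and the pages must stay untouched until the write completes;
     * with CEPH_IOC_SYNCIO any buffer works because the data is copied. */
    if (posix_memalign(&buf, 4096, 4096))
        return 1;
    memset(buf, 'x', 4096);

    int fd = open("/mnt/cephfs/direct", O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0)
        return 1;

    /* the length and file offset must also be aligned */
    write(fd, buf, 4096);

    close(fd);
    free(buf);
    return 0;
}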
Not really sure...
-Greg
