From: Gang He <GHe@suse.com>
To: ocfs2-devel@oss.oracle.com
Subject: [Ocfs2-devel] Interest in DAX for OCFS2 and/or GFS2?
Date: Fri, 11 Oct 2019 07:21:35 +0000	[thread overview]
Message-ID: <CH2PR18MB3206F78AC614EAC963AF8203CF970@CH2PR18MB3206.namprd18.prod.outlook.com> (raw)
In-Reply-To: <AT5PR8401MB1025DFAE60F5A5A4BCC5D2038B940@AT5PR8401MB1025.NAMPRD84.PROD.OUTLOOK.COM>

Hello Bill,

> -----Original Message-----
> From: cluster-devel-bounces@redhat.com
> [mailto:cluster-devel-bounces@redhat.com] On Behalf Of Hayes, Bill
> Sent: 2019-10-11 0:42
> To: ocfs2-devel@oss.oracle.com; cluster-devel@redhat.com
> Cc: Rocky (The good-looking one) Craig <rocky.craig@hpe.com>
> Subject: [Cluster-devel] Interest in DAX for OCFS2 and/or GFS2?
> 
> We have been experimenting with distributed file systems across multiple
> Linux instances connected to a shared block device.  In our setup, the "disk" is
> not a legacy SAN or iSCSI.  Instead it is a shared memory-semantic fabric
> that is being presented as a Linux block device.
> 
> We have been working with both GFS2 and OCFS2 to evaluate the suitability
> to work on our shared memory configuration.  Right now we have gotten
> both GFS2 and OCFS2 to work with the block driver, but each file system still does
> block copies.  Our goal is to extend mmap() of the file system(s) to allow true
> zero-copy load/store access directly to the memory fabric.  We believe
> adding DAX support into the OCFS2 and/or GFS2 is an expedient path to use a
> block device that fronts our memory fabric with DAX.
> 
> Based on the HW that OCFS2 and GFS2 were built for (iSCSI, FC, DRBD, etc)
> there probably has been no reason to implement DAX to date.  The advent of
> various memory semantic fabrics (Gen-Z, NUMAlink, etc) is driving our
> interest in extending OCFS2 and/or GFS2 to take advantage of DAX.  We
> have two platforms set up, one based on actual hardware and another based
> on VMs and are eager to begin deeper work.
> 
> Has there been any discussion or interest in DAX support in OCFS2?
No, but I think this is a very interesting topic/feature.
I hope we can put some effort into investigating how to make OCFS2 support DAX, since some local file systems have supported this feature for a long time (see the sketch below).
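
For illustration, here is a minimal userspace sketch of the kind of zero-copy access Bill describes, as it already works on a local DAX-capable file system (e.g. ext4 or XFS mounted with -o dax; the /mnt/pmem path and file name are only assumptions for the example):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Older libc headers may lack these; the values match the kernel UAPI. */
#ifndef MAP_SHARED_VALIDATE
#define MAP_SHARED_VALIDATE 0x03
#endif
#ifndef MAP_SYNC
#define MAP_SYNC 0x80000
#endif

int main(void)
{
    /* Assumes /mnt/pmem is an ext4/XFS file system mounted with -o dax. */
    int fd = open("/mnt/pmem/data", O_RDWR | O_CREAT, 0644);
    if (fd < 0 || ftruncate(fd, 4096) < 0) {
        perror("open/ftruncate");
        return 1;
    }

    /* MAP_SYNC (together with MAP_SHARED_VALIDATE) only succeeds on a
     * DAX mapping, so it doubles as a runtime check that the mapping
     * really goes straight to the media rather than the page cache. */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap(MAP_SYNC)");
        return 1;
    }

    strcpy(p, "hello");  /* a plain CPU store, no block copy involved */

    munmap(p, 4096);
    close(fd);
    return 0;
}

Persistence on top of this (CPU cache flushes and so on) is a separate problem, and on a cluster file system every such store would additionally have to be covered by a cluster lock, which leads to the latency point below.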

> Is there interest from the OCFS2 development community to see DAX support
> developed and put upstream?
From my personal point of view, it is very attractive.
But we should also be aware that cluster file systems are usually built on a DLM, and the DLM instances on the cluster nodes communicate with one another over the network.
That means network latency has to be taken into account.

Thanks
Gang

> 
> Has there been any discussion or interest in DAX support in GFS2?
> Is there interest from the GFS2 development community to see DAX support
> developed and put upstream?
> 
> Regards,
> Bill
> 
