From: Steven Whitehouse <swhiteho@redhat.com>
To: ocfs2-devel@oss.oracle.com
Subject: [Ocfs2-devel] [Cluster-devel] Interest in DAX for OCFS2 and/or GFS2?
Date: Fri, 11 Oct 2019 11:14:31 +0100	[thread overview]
Message-ID: <8d0de038-9127-3a95-ab1d-e0ac2cc483de@redhat.com> (raw)
In-Reply-To: <CH2PR18MB3206F78AC614EAC963AF8203CF970@CH2PR18MB3206.namprd18.prod.outlook.com>

Hi,

On 11/10/2019 08:21, Gang He wrote:
> Hello Hayes,
>
>> -----Original Message-----
>> From: cluster-devel-bounces@redhat.com
>> [mailto:cluster-devel-bounces@redhat.com] On Behalf Of Hayes, Bill
>> Sent: 2019年10月11日 0:42
>> To: ocfs2-devel@oss.oracle.com; cluster-devel@redhat.com
>> Cc: Rocky (The good-looking one) Craig <rocky.craig@hpe.com>
>> Subject: [Cluster-devel] Interest in DAX for OCFS2 and/or GFS2?
>>
>> We have been experimenting with distributed file systems across multiple
>> Linux instances connected to a shared block device.  In our setup, the "disk" is
>> not a legacy SAN or iSCSI.  Instead it is a shared memory-semantic fabric
>> that is being presented as a Linux block device.
>>
>> We have been working with both GFS2 and OCFS2 to evaluate their suitability
>> for our shared memory configuration.  Right now we have gotten both GFS2
>> and OCFS2 to work with the block driver, but each file system still does
>> block copies.  Our goal is to extend mmap() of the file system(s) to allow true
>> zero-copy load/store access directly to the memory fabric.  We believe
>> adding DAX support to OCFS2 and/or GFS2 is an expedient path to use a
>> block device that fronts our memory fabric with DAX.
>>
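
To make the above concrete, the kind of access being described might look
something like the following userspace sketch (illustrative only; the path
and mount point are assumptions, presuming a DAX-capable filesystem):

/* Minimal sketch: zero-copy load/store access to a file on a
 * DAX-capable filesystem.  Path and length are illustrative only. */
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        int fd = open("/mnt/fabric/shared.dat", O_RDWR);
        if (fd < 0)
                return 1;

        /* With DAX this mapping resolves to the device pages themselves,
         * so no page cache copy sits between the CPU and the fabric. */
        uint64_t *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
                close(fd);
                return 1;
        }

        p[0] = 42;              /* plain CPU store straight to the fabric */
        uint64_t v = p[1];      /* plain CPU load */
        (void)v;

        munmap(p, 4096);
        close(fd);
        return 0;
}
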
>> Based on the HW that OCFS2 and GFS2 were built for (iSCSI, FC, DRBD, etc.)
>> there probably has been no reason to implement DAX to date.  The advent of
>> various memory-semantic fabrics (Gen-Z, NUMAlink, etc.) is driving our
>> interest in extending OCFS2 and/or GFS2 to take advantage of DAX.  We
>> have two platforms set up, one based on actual hardware and another based
>> on VMs, and we are eager to begin deeper work.
>>
>> Has there been any discussion or interest in DAX support in OCFS2?
> No, but I think this is a very interesting topic/feature.
> I hope we can put some effort into investigating how to make OCFS2 support DAX, since some local file systems have supported this feature for a long time.

Well, I think it is more accurate to say that the feature has been
evolving in local filesystems for some time. However, we are approaching
the point where it makes sense to think about this for clustered
filesystems too, so it is a timely topic for discussion in that sense.
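
As one example of that evolution on the local filesystems, MAP_SYNC now
allows an application to persist data with CPU cache flushes alone once a
mapping is established. A rough sketch (the path is an assumption,
presuming an ext4 or XFS filesystem mounted with -o dax):

/* Sketch: persisting through a DAX mapping with MAP_SYNC.  With
 * MAP_SYNC the kernel guarantees the block mapping is durable before
 * any write fault completes, so userspace can make stores persistent
 * with cache flush instructions alone, without calling fsync(). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MAP_SYNC
#define MAP_SYNC 0x80000
#endif
#ifndef MAP_SHARED_VALIDATE
#define MAP_SHARED_VALIDATE 0x03
#endif

int main(void)
{
        int fd = open("/mnt/pmem/log.dat", O_RDWR);
        if (fd < 0)
                return 1;

        /* MAP_SYNC must be paired with MAP_SHARED_VALIDATE; mmap()
         * fails outright if the filesystem cannot honour the guarantee. */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
        if (p == MAP_FAILED) {
                close(fd);
                return 1;
        }

        memcpy(p, "record", 6);
        /* ...followed by clwb/clflushopt and a fence to persist... */

        munmap(p, 4096);
        close(fd);
        return 0;
}
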


>> Is there interest from the OCFS2 development community to see DAX support
>> developed and put upstream?
> From my personal view, it is very attractive.
> But we should also be aware that cluster file systems are usually based on a DLM, and the DLM instances communicate with each other over the network.
> That means network latency has to be considered.
>
> Thanks
> Gang

Hopefully we can come up with a design that avoids the network latency,
at least in most cases. With GFS2 direct I/O, for example, the locking is
designed such that DLM lock requests are only needed in the case of block
allocation/deallocation. Extending the same concept to DAX should allow
(after the initial page fault) true distributed shared memory (DSM) via
the filesystem. We may eventually be able to do even better than that,
but it would be a good starting point.
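
To illustrate the idea only (this is not GFS2 code, and the helpers are
hypothetical names assumed for the sketch), the fault path might be
structured roughly as:

#include <linux/fs.h>
#include <linux/mm.h>

/* Hypothetical helpers, assumed to exist for illustration only: */
bool extent_already_mapped(struct inode *inode, pgoff_t pgoff);
vm_fault_t insert_dax_mapping(struct inode *inode, struct vm_fault *vmf);
void cluster_lock_exclusive(struct inode *inode);
void cluster_unlock(struct inode *inode);
int allocate_blocks(struct inode *inode, pgoff_t pgoff);

/* Conceptual DAX fault handler: DLM traffic only when the block
 * mapping has to change. */
static vm_fault_t example_dax_fault(struct vm_fault *vmf)
{
        struct inode *inode = file_inode(vmf->vma->vm_file);

        /* Common case: the extent already exists, so the fault is
         * satisfied from node-local state with no network round trip. */
        if (extent_already_mapped(inode, vmf->pgoff))
                return insert_dax_mapping(inode, vmf);

        /* Rare case: allocation changes shared metadata, so the
         * exclusive cluster-wide lock is required. */
        cluster_lock_exclusive(inode);
        if (allocate_blocks(inode, vmf->pgoff)) {
                cluster_unlock(inode);
                return VM_FAULT_SIGBUS;
        }
        cluster_unlock(inode);

        return insert_dax_mapping(inode, vmf);
}

After that first fault, loads and stores from every node would hit the
same fabric memory directly, which is the DSM behaviour described above.
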

This is not something that the GFS2 developers have looked at in any
detail recently; however, it is something that would be interesting, and
we'd be very happy for someone to work on this and send patches in due
course,

Steve.


>
>> Has there been any discussion or interest in DAX support in GFS2?
>> Is there interest from the GFS2 development community to see DAX support
>> developed and put upstream?
>>
>> Regards,
>> Bill
>>
>
>

Thread overview: 6+ messages
2019-10-10 16:41 [Cluster-devel] Interest in DAX for OCFS2 and/or GFS2? Hayes, Bill
2019-10-11  7:21 ` [Ocfs2-devel] " Gang He
2019-10-11  7:21   ` [Cluster-devel] " Gang He
2019-10-11 10:14   ` Steven Whitehouse [this message]
2019-10-11 10:14     ` Steven Whitehouse
2019-10-17 20:33 ` Dave Chinner
