* [Cluster-devel] Interest in DAX for OCFS2 and/or GFS2?
@ 2019-10-10 16:41 Hayes, Bill
2019-10-11 7:21 ` [Cluster-devel] " Gang He
2019-10-17 20:33 ` Dave Chinner
0 siblings, 2 replies; 6+ messages in thread
From: Hayes, Bill @ 2019-10-10 16:41 UTC (permalink / raw)
To: cluster-devel.redhat.com
We have been experimenting with distributed file systems across multiple Linux instances connected to a shared block device. In our setup, the "disk" is not a legacy SAN or iSCSI. Instead it is a shared memory-semantic fabric that is being presented as a Linux block device.
We have been working with both GFS2 and OCFS2 to evaluate their suitability for our shared-memory configuration. Right now we have gotten both GFS2 and OCFS2 to work with the block driver, but each file system still does block copies. Our goal is to extend mmap() in the file system(s) to allow true zero-copy load/store access directly to the memory fabric. We believe adding DAX support to OCFS2 and/or GFS2 is an expedient path to using a block device that fronts our memory fabric with DAX.
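The access pattern we are after can be sketched from userspace (a toy example against a regular temp file; on a DAX mount the identical mmap() would give load/store access straight to the fabric, with no intermediate block copy):

```python
import mmap
import os
import tempfile

# Toy sketch: map a file and access it with plain loads/stores.
# Against a regular file these go through the page cache; on a
# DAX-capable filesystem the same mmap() call yields zero-copy
# load/store access to the backing memory itself.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4096)
with mmap.mmap(fd, 4096) as m:
    m[0:5] = b"hello"        # store through the mapping
    data = bytes(m[0:5])     # load through the same mapping
os.close(fd)
os.unlink(path)
```

This is the behavior we want to preserve end-to-end: once DAX is in play, those stores land in fabric memory rather than in a page-cache copy that must later be written back.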
Based on the HW that OCFS2 and GFS2 were built for (iSCSI, FC, DRBD, etc.) there has probably been no reason to implement DAX to date. The advent of various memory-semantic fabrics (Gen-Z, NUMAlink, etc.) is driving our interest in extending OCFS2 and/or GFS2 to take advantage of DAX. We have two platforms set up, one based on actual hardware and another based on VMs, and we are eager to begin deeper work.
Has there been any discussion or interest in DAX support in OCFS2?
Is there interest from the OCFS2 development community to see DAX support developed and put upstream?
Has there been any discussion or interest in DAX support in GFS2?
Is there interest from the GFS2 development community to see DAX support developed and put upstream?
Regards,
Bill
* [Ocfs2-devel] Interest in DAX for OCFS2 and/or GFS2?
2019-10-10 16:41 [Cluster-devel] Interest in DAX for OCFS2 and/or GFS2? Hayes, Bill
@ 2019-10-11 7:21 ` Gang He
2019-10-17 20:33 ` Dave Chinner
1 sibling, 0 replies; 6+ messages in thread
From: Gang He @ 2019-10-11 7:21 UTC (permalink / raw)
To: ocfs2-devel
Hello Hayes,
> -----Original Message-----
> From: cluster-devel-bounces at redhat.com
> [mailto:cluster-devel-bounces at redhat.com] On Behalf Of Hayes, Bill
> Sent: 2019-10-11 0:42
> To: ocfs2-devel at oss.oracle.com; cluster-devel at redhat.com
> Cc: Rocky (The good-looking one) Craig <rocky.craig@hpe.com>
> Subject: [Cluster-devel] Interest in DAX for OCFS2 and/or GFS2?
>
> We have been experimenting with distributed file systems across multiple
> Linux instances connected to a shared block device. In our setup, the "disk" is
> not a legacy SAN or iSCSI. Instead it is a shared memory-semantic fabric
> that is being presented as a Linux block device.
>
> We have been working with both GFS2 and OCFS2 to evaluate the suitability
> to work on our shared memory configuration. Right now we have gotten
> both GFS2 and OCFS2 to work with block driver but each file system still does
> block copies. Our goal is to extend mmap() of the file system(s) to allow true
> zero-copy load/store access directly to the memory fabric. We believe
> adding DAX support into the OCFS2 and/or GFS2 is an expedient path to use a
> block device that fronts our memory fabric with DAX.
>
> Based on the HW that OCFS2 and GFS2 were built for (iSCSI, FC, DRBD, etc)
> there probably has been no reason to implement DAX to date. The advent of
> various memory semantic fabrics (Gen-Z, NUMAlink, etc) is driving our
> interest in extending OCFS2 and/or GFS2 to take advantage of DAX. We
> have two platforms set up, one based on actual hardware and another based
> on VMs and are eager to begin deeper work.
>
> Has there been any discussion or interest in DAX support in OCFS2?
No, but I think this is a very interesting topic/feature.
I hope we can put some effort into investigating how to make OCFS2 support DAX, since some local file systems have supported this feature for a long time.
> Is there interest from the OCFS2 development community to see DAX support
> developed and put upstream?
From my personal view, it is very attractive.
But we should also be aware that cluster file systems are usually based on a DLM, and DLM nodes communicate with each other over the network.
That means network latency has to be considered.
Thanks
Gang
>
> Has there been any discussion or interest in DAX support in GFS2?
> Is there interest from the GFS2 development community to see DAX support
> developed and put upstream?
>
> Regards,
> Bill
>
* [Ocfs2-devel] [Cluster-devel] Interest in DAX for OCFS2 and/or GFS2?
2019-10-11 7:21 ` [Cluster-devel] " Gang He
@ 2019-10-11 10:14 ` Steven Whitehouse
-1 siblings, 0 replies; 6+ messages in thread
From: Steven Whitehouse @ 2019-10-11 10:14 UTC (permalink / raw)
To: ocfs2-devel
Hi,
On 11/10/2019 08:21, Gang He wrote:
> Hello hayes,
>
>> -----Original Message-----
>> From: cluster-devel-bounces at redhat.com
>> [mailto:cluster-devel-bounces at redhat.com] On Behalf Of Hayes, Bill
>> Sent: 2019-10-11 0:42
>> To: ocfs2-devel at oss.oracle.com; cluster-devel at redhat.com
>> Cc: Rocky (The good-looking one) Craig <rocky.craig@hpe.com>
>> Subject: [Cluster-devel] Interest in DAX for OCFS2 and/or GFS2?
>>
>> We have been experimenting with distributed file systems across multiple
>> Linux instances connected to a shared block device. In our setup, the "disk" is
>> not a legacy SAN or iSCSI. Instead it is a shared memory-semantic fabric
>> that is being presented as a Linux block device.
>>
>> We have been working with both GFS2 and OCFS2 to evaluate the suitability
>> to work on our shared memory configuration. Right now we have gotten
>> both GFS2 and OCFS2 to work with block driver but each file system still does
>> block copies. Our goal is to extend mmap() of the file system(s) to allow true
>> zero-copy load/store access directly to the memory fabric. We believe
>> adding DAX support into the OCFS2 and/or GFS2 is an expedient path to use a
>> block device that fronts our memory fabric with DAX.
>>
>> Based on the HW that OCFS2 and GFS2 were built for (iSCSI, FC, DRBD, etc)
>> there probably has been no reason to implement DAX to date. The advent of
>> various memory semantic fabrics (Gen-Z, NUMAlink, etc) is driving our
>> interest in extending OCFS2 and/or GFS2 to take advantage of DAX. We
>> have two platforms set up, one based on actual hardware and another based
>> on VMs and are eager to begin deeper work.
>>
>> Has there been any discussion or interest in DAX support in OCFS2?
> No, but I think this is very interesting topic/feature.
> I hope we can take some efforts in investigating how to make OCFS2 support DAX, since some local file systems have supported this feature for long time.
Well, I think it is more accurate to say that the feature has been
evolving in local filesystems for some time. However, we are approaching
the point where it makes sense to think about this for clustered
filesystems, so it is a timely topic for discussion in that sense.
>> Is there interest from the OCFS2 development community to see DAX support
>> developed and put upstream?
> From my personal view, it is very attractive.
> But we also aware cluster file systems are usually based on DLM, DLM usually communicates with each other via the network.
> That means network latency should be considered.
>
> Thanks
> Gang
Hopefully we can come up with a design which avoids the network latency,
at least in most cases. With GFS2 direct_io, for example, the locking is
designed such that DLM lock requests are only needed in the case of block
allocation/deallocation. Extending the same concept to DAX should allow
(after the initial page fault) true DSM via the filesystem. It may be
possible to do even better eventually, but that would be a good starting point.
It has not been something that the GFS2 developers have looked at in any
detail recently; however, it is something that would be interesting, and
we'd be very happy for someone to work on this and send patches in due
course,
Steve.
>
>> Has there been any discussion or interest in DAX support in GFS2?
>> Is there interest from the GFS2 development community to see DAX support
>> developed and put upstream?
>>
>> Regards,
>> Bill
>>
>
>
* [Cluster-devel] Interest in DAX for OCFS2 and/or GFS2?
2019-10-10 16:41 [Cluster-devel] Interest in DAX for OCFS2 and/or GFS2? Hayes, Bill
2019-10-11 7:21 ` [Cluster-devel] " Gang He
@ 2019-10-17 20:33 ` Dave Chinner
1 sibling, 0 replies; 6+ messages in thread
From: Dave Chinner @ 2019-10-17 20:33 UTC (permalink / raw)
To: cluster-devel.redhat.com
On Thu, Oct 10, 2019 at 04:41:56PM +0000, Hayes, Bill wrote:
> Has there been any discussion or interest in DAX support in OCFS2?
> Is there interest from the OCFS2 development community to see DAX support developed and put upstream?
>
> Has there been any discussion or interest in DAX support in GFS2?
> Is there interest from the GFS2 development community to see DAX support developed and put upstream?
From a DAX implementation POV, you're probably best off heading down the
GFS2 path, as FS-DAX requires the filesystem to use the fs/iomap/
extent mapping implementation. GFS2 is already partially ported to
use the iomap infrastructure, though more work is needed to provide
the iomap functionality DAX requires. OCFS2 would require a lot more
work on this front....
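The iomap contract referred to above boils down to the filesystem answering "what backs file range (pos, len)?" one contiguous extent at a time. A rough userspace model (the names `iomap_begin` and `Extent` are illustrative inventions; the real interface is `struct iomap_ops` under fs/iomap/):

```python
from collections import namedtuple

# Illustrative model of an iomap-style extent lookup: given a file
# position and length, report at most one contiguous mapping.  DAX
# depends on this shape of interface because it maps device memory
# directly, so the fs must describe whole extents rather than fill
# the page cache block by block.
Extent = namedtuple("Extent", "file_off dev_off length")

def iomap_begin(extents, pos, length):
    for e in sorted(extents, key=lambda e: e.file_off):
        if e.file_off <= pos < e.file_off + e.length:
            avail = e.file_off + e.length - pos
            return ("MAPPED", e.dev_off + (pos - e.file_off),
                    min(length, avail))
        if pos < e.file_off:                  # gap before next extent
            return ("HOLE", None, min(length, e.file_off - pos))
    return ("HOLE", None, length)             # past the last extent

ext = [Extent(0, 1000, 8), Extent(16, 2000, 4)]
m1 = iomap_begin(ext, 4, 8)    # clipped at the end of the first extent
m2 = iomap_begin(ext, 10, 4)   # entirely inside the hole
```

A caller iterates: map, act on the returned range, advance `pos`, repeat. That iteration is what GFS2's partial iomap port already provides and what OCFS2 would have to grow from scratch.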
Cheers,
Dave.
--
Dave Chinner
dchinner at redhat.com