* [LSF/MM TOPIC] SMR: Disrupting recording technology meriting a new class of storage device
@ 2014-02-01  2:24 Albert Chen
  2014-02-07 13:00 ` Carlos Maiolino
  0 siblings, 1 reply; 6+ messages in thread
From: Albert Chen @ 2014-02-01  2:24 UTC (permalink / raw)
  To: lsf-pc, James Borden, Jim Malina, Curtis Stevens
  Cc: linux-ide, linux-fsdevel, linux-scsi

[LSF/MM TOPIC] SMR: Disrupting recording technology meriting a new class of storage device

Shingled Magnetic Recording (SMR) is a disruptive technology that delivers the next areal density gain for the HDD industry by partially overlapping tracks. Shingling requires physical writes to be sequential, which opens the question of how to address this behavior at the system level. The two general approaches contemplated are to do the block management either in the device or in the host storage stack/file system through Zoned Block Commands (ZBC).

The use of ZBC to handle SMR block management yields several benefits such as:
- Predictable performance and latency
- Faster development time
- Access to application and system level semantic information
- Scalability / Fewer Drive Resources
- Higher reliability

Essential to a host-managed approach (ZBC) is the openness of Linux, and its community is a good place for WD to validate our thinking and seek feedback: where in the Linux storage stack is the best place to add ZBC handling? At the device-mapper layer? Or somewhere else in the storage stack? New ideas and comments are appreciated.

For more information about ZBC, please refer to Ted's <tytso@MIT.EDU> email to linux-fsdevel@vger.kernel.org with the subject "[RFC] Draft Linux kernel interfaces for ZBC drives".
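For readers unfamiliar with the zone model, the constraint shingling imposes can be sketched in a few lines (a purely illustrative Python model, not the actual ZBC interface or any kernel code): each zone accepts writes only at its current write pointer and must be reset as a whole before it can be rewritten.

```python
# Hypothetical model of one zone on a host-managed SMR/ZBC device.
class Zone:
    def __init__(self, start, length):
        self.start = start          # first LBA of the zone
        self.length = length        # zone size in blocks
        self.write_pointer = start  # next LBA that may be written

    def write(self, lba, nblocks):
        # Host-managed behavior: any write not at the write pointer fails.
        if lba != self.write_pointer:
            raise IOError("unaligned write: lba %d != wp %d"
                          % (lba, self.write_pointer))
        if lba + nblocks > self.start + self.length:
            raise IOError("write crosses zone boundary")
        self.write_pointer += nblocks

    def reset(self):
        # Rewriting shingled tracks means resetting the whole zone.
        self.write_pointer = self.start

z = Zone(start=0, length=256)
z.write(0, 8)        # sequential: accepted
z.write(8, 8)        # continues at the write pointer: accepted
try:
    z.write(4, 1)    # in-place rewrite: rejected
except IOError as e:
    print(e)
```

Either the drive firmware or the host storage stack has to hide this behavior from random-write workloads; that is precisely the design choice posed above.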


* Re: [LSF/MM TOPIC] SMR: Disrupting recording technology meriting a new class of storage device
  2014-02-01  2:24 [LSF/MM TOPIC] SMR: Disrupting recording technology meriting a new class of storage device Albert Chen
@ 2014-02-07 13:00 ` Carlos Maiolino
  2014-02-07 13:46   ` Hannes Reinecke
  0 siblings, 1 reply; 6+ messages in thread
From: Carlos Maiolino @ 2014-02-07 13:00 UTC (permalink / raw)
  To: Albert Chen
  Cc: lsf-pc, James Borden, Jim Malina, Curtis Stevens, linux-ide,
	linux-fsdevel, linux-scsi

Hi,

On Sat, Feb 01, 2014 at 02:24:33AM +0000, Albert Chen wrote:
> [LSF/MM TOPIC] SMR: Disrupting recording technology meriting a new class of storage device
> 
> Shingle Magnetic Recording is a disruptive technology that delivers the next areal density gain for the HDD industry by partially overlapping tracks. Shingling requires physical writes to be sequential, and opens the question of how to address this behavior at a system level. Two general approaches contemplated are to either to do the block management in the device or in the host storage stack/file system through Zone Block Commands (ZBC).
> 
> The use of ZBC to handle SMR block management yields several benefits such as:
> - Predictable performance and latency
> - Faster development time
> - Access to application and system level semantic information
> - Scalability / Fewer Drive Resources
> - Higher reliability
> 
> Essential to a host managed approach (ZBC) is the openness of Linux and its community is a good place for WD to validate and seek feedback for our thinking - where in the Linux system stack is the best place to add ZBC handling? at the Device Mapper layer? or somewhere else in the storage stack? New ideas and comments are appreciated.

If you add ZBC handling into the device-mapper layer, aren't you assuming that
all SMR devices will be managed by device-mapper? This doesn't look right IMHO.
These devices should be manageable either via DM or directly via the storage
layer, and any other layers making use of these devices (like DM, for
example) should be able to communicate with them and send ZBC commands as
needed.

> 
> For more information about ZBC, please refer to Ted's <tytso@MIT.EDU> email to linux-fsdevel@vger.kernel.org with the subject " [RFC] Draft Linux kernel interfaces for ZBC drives".
> --
> To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

-- 
Carlos


* Re: [LSF/MM TOPIC] SMR: Disrupting recording technology meriting a new class of storage device
  2014-02-07 13:00 ` Carlos Maiolino
@ 2014-02-07 13:46   ` Hannes Reinecke
  2014-02-07 17:32     ` Jim Malina
  0 siblings, 1 reply; 6+ messages in thread
From: Hannes Reinecke @ 2014-02-07 13:46 UTC (permalink / raw)
  To: Carlos Maiolino, Albert Chen
  Cc: lsf-pc, James Borden, Jim Malina, Curtis Stevens, linux-ide,
	linux-fsdevel, linux-scsi

On 02/07/2014 02:00 PM, Carlos Maiolino wrote:
> Hi,
> 
> On Sat, Feb 01, 2014 at 02:24:33AM +0000, Albert Chen wrote:
>> [LSF/MM TOPIC] SMR: Disrupting recording technology meriting
>> a new class of storage device
>>
>> Shingle Magnetic Recording is a disruptive technology that
>> delivers the next areal density gain for the HDD industry by
>> partially overlapping tracks. Shingling requires physical
>> writes to be sequential, and opens the question of how to
>> address this behavior at a system level. Two general approaches
>> contemplated are to either to do the block management in
>> the device or in the host storage stack/file system through
>> Zone Block Commands (ZBC).
>>
>> The use of ZBC to handle SMR block management yields several
>> benefits such as:
>> - Predictable performance and latency
>> - Faster development time
>> - Access to application and system level semantic information
>> - Scalability / Fewer Drive Resources
>> - Higher reliability
>>
>> Essential to a host managed approach (ZBC) is the openness of
>> Linux and its community is a good place for WD to validate and
>> seek feedback for our thinking - where in the Linux system stack
>> is the best place to add ZBC handling? at the Device Mapper layer?
>> or somewhere else in the storage stack? New ideas and comments
>> are appreciated.
> 
> If you add ZBC handling into the device-mapper layer, aren't you supposing that
> all SMR devices will be managed by device-mapper? This doesn't look right IMHO.
> These devices should be able to be managed via DM or either directly via de
> storage layer. And any other layers making use of these devices (like DM for
> example) should be able to communicate with them and send ZBC commands as
> needed.
> 
Precisely. Adding a new device type (and a new ULD to the SCSI
midlayer) seems to be the right idea here.
Then we could think about how to integrate this into the block layer;
e.g. we could identify the zones with partitions,
or mirror the zones via block_limits.

There is actually a good chance that we can tweak btrfs to
run unmodified on such a disk; after all, sequential writes
are not a big deal for btrfs. The only issue we might have
is that we might need to re-allocate blocks to free up zones.
But some btrfs developers have assured me this shouldn't be too hard.
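The reallocation step mentioned above can be sketched abstractly (an illustrative Python model with invented names, bearing no relation to actual btrfs code): copy the still-referenced blocks of a mostly-stale zone sequentially into an empty zone, repoint the references, and then the old zone can be reset.

```python
# Toy zone: just a map of in-zone offset -> block payload.
class SimpleZone:
    def __init__(self):
        self.data = {}

def reclaim_zone(old_zone, empty_zone, live_blocks, mapping):
    """live_blocks: logical block -> offset of still-referenced data
    in old_zone; mapping: logical block -> (zone, offset) table."""
    wp = 0
    for lblock in sorted(live_blocks):
        # Append sequentially, as the shingled medium requires.
        empty_zone.data[wp] = old_zone.data[live_blocks[lblock]]
        mapping[lblock] = (empty_zone, wp)   # repoint the reference
        wp += 1
    old_zone.data.clear()                    # whole zone is now free
    return wp                                # number of blocks moved

old, new = SimpleZone(), SimpleZone()
old.data = {0: "a", 1: "b", 2: "c"}          # offset 1 is stale
mapping = {10: (old, 0), 11: (old, 2)}       # blocks 10, 11 still live
moved = reclaim_zone(old, new, {10: 0, 11: 2}, mapping)
```

A copy-on-write filesystem already relocates blocks as a matter of course, which is why this looks like a natural fit.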

Personally I don't like the idea of _having_ to use a device-mapper
module for these things. What I would like is to give the user a
choice; if there are specialized filesystems around which can deal
with such a disk (hello, ltfs :-) then fine. If not, of course, we
should have a device-mapper module to hide the grubby details from
unsuspecting filesystems.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      zSeries & Storage
hare@suse.de			      +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: J. Hawn, J. Guild, F. Imendörffer, HRB 16746 (AG Nürnberg)


* RE: [LSF/MM TOPIC] SMR: Disrupting recording technology meriting a new class of storage device
  2014-02-07 13:46   ` Hannes Reinecke
@ 2014-02-07 17:32     ` Jim Malina
  2014-02-11 11:57       ` Carlos Maiolino
  0 siblings, 1 reply; 6+ messages in thread
From: Jim Malina @ 2014-02-07 17:32 UTC (permalink / raw)
  To: Hannes Reinecke, Carlos Maiolino, Albert Chen
  Cc: lsf-pc, James Borden, Curtis Stevens, linux-ide, linux-fsdevel,
	linux-scsi



> -----Original Message-----
> From: Hannes Reinecke [mailto:hare@suse.de]
> Sent: Friday, February 07, 2014 5:46 AM
> To: Carlos Maiolino; Albert Chen
> Cc: lsf-pc@lists.linux-foundation.org; James Borden; Jim Malina; Curtis
> Stevens; linux-ide@vger.kernel.org; linux-fsdevel@vger.kernel.org; linux-
> scsi@vger.kernel.org
> Subject: Re: [LSF/MM TOPIC] SMR: Disrupting recording technology meriting
> a new class of storage device
> 
> On 02/07/2014 02:00 PM, Carlos Maiolino wrote:
> > Hi,
> >
> > On Sat, Feb 01, 2014 at 02:24:33AM +0000, Albert Chen wrote:
> >> [LSF/MM TOPIC] SMR: Disrupting recording technology meriting a new
> >> class of storage device
> >>
> >> Shingle Magnetic Recording is a disruptive technology that delivers
> >> the next areal density gain for the HDD industry by partially
> >> overlapping tracks. Shingling requires physical writes to be
> >> sequential, and opens the question of how to address this behavior at
> >> a system level. Two general approaches contemplated are to either to
> >> do the block management in the device or in the host storage
> >> stack/file system through Zone Block Commands (ZBC).
> >>
> >> The use of ZBC to handle SMR block management yields several benefits
> >> such as:
> >> - Predictable performance and latency
> >> - Faster development time
> >> - Access to application and system level semantic information
> >> - Scalability / Fewer Drive Resources
> >> - Higher reliability
> >>
> >> Essential to a host managed approach (ZBC) is the openness of Linux
> >> and its community is a good place for WD to validate and seek
> >> feedback for our thinking - where in the Linux system stack is the
> >> best place to add ZBC handling? at the Device Mapper layer?
> >> or somewhere else in the storage stack? New ideas and comments are
> >> appreciated.
> >
> > If you add ZBC handling into the device-mapper layer, aren't you
> > supposing that all SMR devices will be managed by device-mapper? This
> doesn't look right IMHO.
> > These devices should be able to be managed via DM or either directly
> > via de storage layer. And any other layers making use of these devices
> > (like DM for
> > example) should be able to communicate with them and send ZBC
> commands
> > as needed.
> >

 Clarification: ZBC is an interface protocol, a new device type and command set, while SMR is a recording technology. You may have ZBC without SMR, or SMR without ZBC. For example, an SSD may benefit from the ZBC protocol to improve performance and reduce wear, and an SMR drive may be 100% device-managed and not provide the information required of a ZBC device, such as write pointers or zone boundaries.
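To make the distinction concrete, here is a hypothetical sketch (all names invented; these are not the draft standard's structures): a ZBC device answers a REPORT ZONES-style query with zone boundaries and write pointers, while a purely device-managed SMR drive exposes nothing of the kind.

```python
from collections import namedtuple

# Fields loosely modeled on a ZBC zone descriptor; purely illustrative.
ZoneDescriptor = namedtuple("ZoneDescriptor",
                            ["zone_type", "start_lba", "length",
                             "write_pointer"])

class HostManagedDisk:
    """ZBC-style device: zone layout and write pointers are host-visible."""
    def __init__(self, zone_len, nzones):
        self.zones = [ZoneDescriptor("sequential-write-required",
                                     i * zone_len, zone_len, i * zone_len)
                      for i in range(nzones)]

    def report_zones(self):
        return list(self.zones)

class DeviceManagedDisk:
    """Device-managed SMR: shingled internally, no zone info exposed."""
    def report_zones(self):
        raise NotImplementedError("drive does not speak ZBC")

zbc = HostManagedDisk(zone_len=65536, nzones=4)
zones = zbc.report_zones()   # the host can plan sequential writes
```

The host-side design question in this thread only arises for the first kind of device; the second looks like an ordinary disk.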

> Precisely. Adding a new device type (and a new ULD to the SCSI
> midlayer) seems to be the right idea here.
> Then we could think of how to integrate this into the block layer; eg we could
> identify the zones with partitions, or mirror the zones via block_limits.
> 
> There is actually a good chance that we can tweak btrfs to run unmodified on
> such a disk; after all, sequential writes are not a big deal for btrfs. The only
> issue we might have is that we might need to re-allocate blocks to free up
> zones.
> But some btrfs developers have assured me this shouldn't be too hard.
> 
> Personally I don't like the idea of _having_ to use a device-mapper module
> for these things. What I would like is giving the user a choice; if there are
> specialized fs around which can deal with such a disk (hello, ltfs :-) then fine.
> If not of course we should be having a device-mapper module to hide the
> grubby details for unsuspecting filesystems.
> 
> Cheers,
> 
> Hannes
> --
> Dr. Hannes Reinecke		      zSeries & Storage
> hare@suse.de			      +49 911 74053 688
> SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
> GF: J. Hawn, J. Guild, F. Imendörffer, HRB 16746 (AG Nürnberg)

jim


* Re: [LSF/MM TOPIC] SMR: Disrupting recording technology meriting a new class of storage device
  2014-02-07 17:32     ` Jim Malina
@ 2014-02-11 11:57       ` Carlos Maiolino
  2014-02-13 22:18         ` [Lsf-pc] " Theodore Ts'o
  0 siblings, 1 reply; 6+ messages in thread
From: Carlos Maiolino @ 2014-02-11 11:57 UTC (permalink / raw)
  To: Jim Malina
  Cc: Hannes Reinecke, Albert Chen, lsf-pc, James Borden,
	Curtis Stevens, linux-ide, linux-fsdevel, linux-scsi

Hi Jim,

On Fri, Feb 07, 2014 at 05:32:44PM +0000, Jim Malina wrote:
> 
> 
> > -----Original Message-----
> > From: Hannes Reinecke [mailto:hare@suse.de]
> > Sent: Friday, February 07, 2014 5:46 AM
> > To: Carlos Maiolino; Albert Chen
> > Cc: lsf-pc@lists.linux-foundation.org; James Borden; Jim Malina; Curtis
> > Stevens; linux-ide@vger.kernel.org; linux-fsdevel@vger.kernel.org; linux-
> > scsi@vger.kernel.org
> > Subject: Re: [LSF/MM TOPIC] SMR: Disrupting recording technology meriting
> > a new class of storage device
> > 
> > On 02/07/2014 02:00 PM, Carlos Maiolino wrote:
> > > Hi,
> > >
> > > On Sat, Feb 01, 2014 at 02:24:33AM +0000, Albert Chen wrote:
> > >> [LSF/MM TOPIC] SMR: Disrupting recording technology meriting a new
> > >> class of storage device
> > >>
> > >> Shingle Magnetic Recording is a disruptive technology that delivers
> > >> the next areal density gain for the HDD industry by partially
> > >> overlapping tracks. Shingling requires physical writes to be
> > >> sequential, and opens the question of how to address this behavior at
> > >> a system level. Two general approaches contemplated are to either to
> > >> do the block management in the device or in the host storage
> > >> stack/file system through Zone Block Commands (ZBC).
> > >>
> > >> The use of ZBC to handle SMR block management yields several benefits
> > >> such as:
> > >> - Predictable performance and latency
> > >> - Faster development time
> > >> - Access to application and system level semantic information
> > >> - Scalability / Fewer Drive Resources
> > >> - Higher reliability
> > >>
> > >> Essential to a host managed approach (ZBC) is the openness of Linux
> > >> and its community is a good place for WD to validate and seek
> > >> feedback for our thinking - where in the Linux system stack is the
> > >> best place to add ZBC handling? at the Device Mapper layer?
> > >> or somewhere else in the storage stack? New ideas and comments are
> > >> appreciated.
> > >
> > > If you add ZBC handling into the device-mapper layer, aren't you
> > > supposing that all SMR devices will be managed by device-mapper? This
> > doesn't look right IMHO.
> > > These devices should be able to be managed via DM or either directly
> > > via de storage layer. And any other layers making use of these devices
> > > (like DM for
> > > example) should be able to communicate with them and send ZBC
> > commands
> > > as needed.
> > >
> 
>  Clarification:  ZBC is an interface protocol.  A new device and command set.   SMR is a recording technology.  You may have ZBC without SMR or SMR without ZBC.  For examples.  SSD may benefit from ZBC protocol to improve performance and reduce wear.   SMR may be 100% device managed and not provide information required of a ZBC device, like write pointers or zone boundaries.
> 

Thanks for the clarification; this just reinforces my view that the ZBC
protocol should be integrated into the generic block layer rather than
made device-mapper dependent, so that it is available to any device that
supports it, with or without the help of DM.


> > Precisely. Adding a new device type (and a new ULD to the SCSI
> > midlayer) seems to be the right idea here.
> > Then we could think of how to integrate this into the block layer; eg we could
> > identify the zones with partitions, or mirror the zones via block_limits.
> > 
> > There is actually a good chance that we can tweak btrfs to run unmodified on
> > such a disk; after all, sequential writes are not a big deal for btrfs. The only
> > issue we might have is that we might need to re-allocate blocks to free up
> > zones.
> > But some btrfs developers have assured me this shouldn't be too hard.
> > 
> > Personally I don't like the idea of _having_ to use a device-mapper module
> > for these things. What I would like is giving the user a choice; if there are
> > specialized fs around which can deal with such a disk (hello, ltfs :-) then fine.
> > If not of course we should be having a device-mapper module to hide the
> > grubby details for unsuspecting filesystems.
> > 
> > Cheers,
> > 
> > Hannes
> > --
> > Dr. Hannes Reinecke		      zSeries & Storage
> > hare@suse.de			      +49 911 74053 688
> > SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
> > GF: J. Hawn, J. Guild, F. Imendörffer, HRB 16746 (AG Nürnberg)
> 
> jim

-- 
Carlos


* Re: [Lsf-pc] [LSF/MM TOPIC] SMR: Disrupting recording technology meriting a new class of storage device
  2014-02-11 11:57       ` Carlos Maiolino
@ 2014-02-13 22:18         ` Theodore Ts'o
  0 siblings, 0 replies; 6+ messages in thread
From: Theodore Ts'o @ 2014-02-13 22:18 UTC (permalink / raw)
  To: Carlos Maiolino
  Cc: Jim Malina, linux-scsi, Albert Chen, linux-ide, Hannes Reinecke,
	linux-fsdevel, lsf-pc, James Borden, Curtis Stevens

On Tue, Feb 11, 2014 at 09:57:40AM -0200, Carlos Maiolino wrote:
> 
> Thanks for clarification, and, this just enforce my concept that ZBC protocol
> should be integrated in the generic block layer not make it device-mapper
> dependent. So, make this available to any device that supports it with or
> without the help of DM.

The kernel interface which I have proposed[1] on the linux-fsdevel
list is indeed something that would be integrated into the generic
block device layer.  My hope is that in the near future (I'm waiting
for ZBC prototypes to show up in Mountain View ;-) we will have
patches that allow Linux to recognize ZBC drives and export the ZBC
functionality via this interface.

[1] http://thread.gmane.org/gmane.linux.file-systems/81970/focus=82309

I also hope that in the near-term future we will have at least one
device-mapper "SMR simulator" which takes a standard block device,
adds write-pointer tracking, and then exports the same interface.
This would allow file systems (perhaps btrfs) to experiment with
working with SMR drives without having to wait to get hold of one of
the ZBC drives.  It would also help people who are interested in
writing a device-mapper shim layer which takes an SMR drive and
exports a host-assisted block device.
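The shim idea can be sketched as follows (a hypothetical Python model; the name SmrAdapter is invented here, and a real device-mapper target would of course be kernel C): random logical writes become sequential appends on the zoned backing store, with a translation table mapping logical to physical blocks.

```python
class SmrAdapter:
    """Toy model of the shim: expose a random-write block device on
    top of a sequential-write-only (zoned) backing store."""
    def __init__(self, zone_size, nzones):
        self.capacity = zone_size * nzones
        self.wp = 0          # write pointer, advances zone by zone
        self.table = {}      # logical LBA -> physical LBA
        self.backing = {}    # stands in for the zoned medium

    def write(self, logical_lba, data):
        if self.wp >= self.capacity:
            raise IOError("device full; zone reclaim needed")
        self.backing[self.wp] = data       # always a sequential append
        self.table[logical_lba] = self.wp  # remember where it landed
        self.wp += 1

    def read(self, logical_lba):
        return self.backing[self.table[logical_lba]]

dev = SmrAdapter(zone_size=4, nzones=2)
dev.write(100, "x")   # random logical addresses...
dev.write(7, "y")
dev.write(100, "z")   # ...including overwrites, land sequentially
```

Note that an overwrite leaves a stale physical block behind, which is exactly why such a shim also needs the zone-reclaim machinery discussed earlier in the thread.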

Of course, the stacking of the device mapper layers:

	HDD <-> SMR_SIMULATOR <-> SMR_ADAPTER 

is basically a no-op except for introducing performance delays.  But
the idea here is not performance, but to allow people to debug their
code.  So the use cases:

	HDD <-> SMR_SIMULATOR <-> SMR_ADAPTER <-> stock linux file system
	HDD <-> SMR_SIMULATOR <-> SMR_ADAPTER <-> ext4 modified to be SMR-friendly
	HDD <-> SMR_SIMULATOR <-> modified btrfs that supports ZBC

would eventually become:

	SMR_HDD <-> SMR_ADAPTER <-> stock linux file system
	SMR_HDD <-> SMR_ADAPTER <-> ext4 modified to be SMR-friendly
	SMR_HDD <-> modified btrfs that supports ZBC

And while we wait for SMR HDDs to become generally available to all
kernel developers, the existence of the device-mapper SMR simulator
will enable us to start work on the device-mapper SMR adapter, and on
file systems that are either modified to be SMR-friendly or modified
to work directly with SMR drives, without any adapter layers.

Regards,

						- Ted

