From: "Matias Bjørling" <m@bjorling.me>
To: Viacheslav Dubeyko <slava@dubeyko.com>,
	lsf-pc@lists.linux-foundation.org
Cc: Linux FS Devel <linux-fsdevel@vger.kernel.org>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	linux-nvme@lists.infradead.org, Vyacheslav.Dubeyko@wdc.com
Subject: Re: [LSF/MM TOPIC][LSF/MM ATTEND] OCSSDs - SMR, Hierarchical Interface, and Vector I/Os
Date: Tue, 3 Jan 2017 20:10:38 +0100	[thread overview]
Message-ID: <9319ce16-8355-3560-95b6-45e3f07220de@bjorling.me> (raw)
In-Reply-To: <1483464921.2440.19.camel@dubeyko.com>

On 01/03/2017 06:35 PM, Viacheslav Dubeyko wrote:
> Hi Matias,
>
> On Tue, 2017-01-03 at 09:56 +0100, Matias Bjørling wrote:
>> On 01/03/2017 12:12 AM, Viacheslav Dubeyko wrote:
>>>
>>> On Mon, 2017-01-02 at 22:06 +0100, Matias Bjørling wrote:
>>>>
>>>> Hi,
>>>>
>>>> The open-channel SSD subsystem is maturing, and drives are
>>>> beginning to become available on the market.
>>> What do you mean? We still have nothing on the market. I haven't
>>> had the opportunity to access any such device. Could you share where
>>> and what devices can be bought on the market?
>>>
>> Hi Vyacheslav,
>>
>> You are right that they are not available off the shelf at a
>> convenient store. You may contact one of these vendors for
>> availability: CNEX Labs (Westlake LightNVM SDK), Radian Memory
>> Systems (RMS-325), and/or EMC (OX Controller + Dragon Fire card).
>
> We, Western Digital, contacted CNEX Labs about half a year ago. Our
> request was refused. We also contacted Radian Memory Systems about a
> year ago, and our negotiations finished with no success at all. And I
> doubt that EMC will share anything with us. So this situation looks
> really weird, especially for an open-source community. We cannot
> access or test any open-channel SSD, neither for money nor under NDA.
> Usually, open source means that everybody has access to the hardware
> and we can discuss implementation, architecture, or approach without
> any restrictions. But we have no access to hardware right now. I
> understand the business model and blah, blah, blah. But from my
> personal point of view it looks like, in the end, there is nothing
> like an open-channel SSD on the market. And I think it is a really
> tricky thing to discuss a software interface, or any other details, of
> something that does not exist at all. If I cannot take and test the
> hardware, then I cannot form my own opinion about this technology.
>

I understand your frustration. It is annoying not to have easy access to
hardware. As you are probably aware, the situation is similar to
host-managed SMR drives: there are customers that use your drives, even
though the drives are not available off the shelf.

All of the open-channel SSD work is done in the open. Patches, new
targets, and so forth are being developed for everyone to see.
Similarly, the NVMe host interface is developed in the open as well. The
interface allows one to implement supporting firmware: the "front-end"
of the FTL on the SSD is removed, and the "back-end" engine is exposed.
It is not much work, and given that HGST already has an SSD firmware
implementation, I bet you can whip up an internal implementation in a
matter of weeks. If you choose to do so, I will bend over backwards to
help you sort out any quirks there might be.
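
To make the split concrete, here is a rough C sketch of what the device
ends up exposing to the host: its geometry and a physical page address
space, with placement decided by the host FTL. The names and fields
below are illustrative assumptions and do not claim to match the actual
lightnvm/OCSSD definitions.

  #include <stdint.h>

  /*
   * Illustrative sketch only -- names and field widths are assumptions,
   * not the real lightnvm/OCSSD structures. The point is that the
   * device reports its geometry, and the host FTL addresses physical
   * pages directly instead of logical blocks.
   */
  struct ocssd_geometry {
          uint16_t num_channels;     /* parallel units on the device */
          uint16_t luns_per_chan;    /* NAND dies behind each channel */
          uint32_t blocks_per_lun;   /* erase blocks per die */
          uint32_t pages_per_block;
          uint32_t page_size;        /* bytes per NAND page */
  };

  /* A physical page address: the host, not the drive, picks placement. */
  struct ocssd_ppa {
          uint16_t channel;
          uint16_t lun;
          uint32_t block;
          uint32_t page;
  };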

Another option is to use the QEMU extension. We are improving it
continuously to make sure it follows the behavior of real OCSSD
hardware. Today we do 90% of our FTL work using QEMU, and most of the
time the FTL code just works when we run it on real hardware.

This is similar to vendors that provide new CPUs, NVDIMMs, and graphics
drivers: some code and refactoring goes in years in advance. What I am
proposing here is to discuss how OCSSDs fit into the storage stack, and
what we can do to improve that. Optimally, most of the lightnvm
subsystem could be removed by exposing vectored I/Os, which would then
allow a target to be implemented as a traditional device mapper module
(see the sketch below). That would be great!
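
To illustrate what "vectored I/Os" means here, a minimal sketch follows,
reusing the ocssd_ppa sketch above: a single request carries a list of
possibly non-contiguous physical page addresses, which is the kind of
primitive a device mapper target could be built on. The names
(vector_rq, ocssd_submit_vector) are made up for illustration; this is
not an existing kernel or NVMe interface.

  enum vector_op { VEC_READ, VEC_WRITE, VEC_ERASE };

  /*
   * Hypothetical vector request: one command, many physical page
   * addresses. A traditional block request covers one contiguous LBA
   * range; here the target hands the device an explicit scatter list.
   */
  struct vector_rq {
          enum vector_op    op;
          unsigned int      nr_ppas;   /* pages addressed by this request */
          struct ocssd_ppa *ppa_list;  /* may be non-contiguous */
          void             *data;      /* nr_ppas * page_size bytes */
  };

  /* Stand-in for whatever primitive the block layer would expose. */
  int ocssd_submit_vector(struct vector_rq *rq);

A device mapper target would then map incoming bios to such requests and
own the logical-to-physical table, while the drive keeps the low-level
media management (ECC and so on), roughly the split open-channel SSDs
aim for today.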
