linux-block.vger.kernel.org archive mirror
* Re: [PATCH v3 00/20] sg: add v4 interface
From: James Bottomley @ 2019-08-08 19:10 UTC
  To: Douglas Gilbert, linux-scsi, linux-block, linux-api
  Cc: martin.petersen, hare, bvanassche

On Wed, 2019-08-07 at 13:42 +0200, Douglas Gilbert wrote:
> This patchset extends the SCSI generic (sg) driver found in
> lk 5.3 .  The sg driver has a version number which is visible
> via ioctl(SG_GET_VERSION_NUM) and is bumped from 3.5.36 to
> 4.0.03 by this patchset. The additions and changes are
> described in some detail in this long webpage:
>     http://sg.danny.cz/sg/sg_v40.html
> 
> Most new features described in the above webpage are not
> implemented in this patchset.

Since this will be an extension of something that exists both in your
sg driver and in the block bsg interface (and thus needs an
implementation there), I added both linux-block and linux-api to the cc
(the latter because you're adding to an API).

Simply extending sg to use the v4 header protocol in uapi/linux/bsg.h
is fine modulo the code being in the right form.  The problems are the
new ioctls you want to add that would need to be present there as well.
 The specific question being how we support async or non-blocking I/O
on the sg and bsg interfaces.  The standard way we add asynchronous I/O
is supposed to be via .poll on the file descriptor.  You already use
read and write in sg, and bsg doesn't have a polling interface, but it
looks like we could use MSG to signal an ioctl is ready to be serviced
for both.  Would shifting to a non-blocking poll based interface for
ioctls remove the need to add these SG_IOSUBMIT/SG_IORECEIVE ioctls
since we could now do everything over blocking or non-blocking SG_IO?

James





* Re: [PATCH v3 00/20] sg: add v4 interface
From: Douglas Gilbert @ 2019-08-08 21:08 UTC
  To: James Bottomley, linux-scsi, linux-block, linux-api
  Cc: martin.petersen, hare, bvanassche, Arnd Bergmann, Tony Battersby

On 2019-08-08 9:10 p.m., James Bottomley wrote:
> On Wed, 2019-08-07 at 13:42 +0200, Douglas Gilbert wrote:
>> This patchset extends the SCSI generic (sg) driver found in
>> lk 5.3 .  The sg driver has a version number which is visible
>> via ioctl(SG_GET_VERSION_NUM) and is bumped from 3.5.36 to
>> 4.0.03 by this patchset. The additions and changes are
>> described in some detail in this long webpage:
>>      http://sg.danny.cz/sg/sg_v40.html
>>
>> Most new features described in the above webpage are not
>> implemented in this patchset.
> 
> Since this will be an extension of something that exists both in your
> sg driver and in the block bsg interface (and thus needs an
> implementation there), I added both linux-block and linux-api to the cc
> (the latter because you're adding to an API).

The SG_IO ioctl has been the synchronous SCSI pass-through interface
for over 15 years. Its quirk is that it takes two different formats
depending on the character device its file descriptor belongs to:
    - sg device file descriptor: sg v3 interface (struct sg_io_hdr)
    - bsg device file descriptor: sg v4 interface (struct sg_io_v4)

I'm only proposing one change in the synchronous interface based
on the SG_IO ioctl: to additionally accept the sg v4 interface in
the sg driver.

Arnd Bergmann considers two interface structures through one ioctl
as undesirable but is prepared to accept SG_IO. His POV is as the
maintainer of 32/64 bit compatibility ioctls. The case of SG_IO
is a well established exception to his rules (i.e. a known evil).

I don't believe extending ioctl SG_IO for asynchronous work is a
good idea. As pointed out above, it is already overloaded too
much. Additionally it would need further flags to differentiate
these cases:
    - sync/async
    - if async: submission or reception
    - and optionally if async: abort (an inflight request)
    - and do you want to add multiple requests in there too?

So are you looking at reducing the number of ioctls to the absolute
minimum? If so I don't think the SG_IO ioctl is the correct vehicle
for that. It doesn't use the _IOR(W) macros; instead it is hard-coded
at 0x2285 ***. And the size checking offered by the _IOR(W)
macro (on the object pointed to by the 3rd argument) is useless with
SG_IO because it takes two different sized objects. Worse, one of
those objects changes size between 32 and 64 bits, while the other
does not.


Stepping back, this started about 18 months ago when security janitors
got upset about the bsg driver's use of write()/read() for its async
interface. Linus Torvalds suggested SG_IOSUBMIT and SG_IORECEIVE to
replace write() and read() respectively in bsg. Instead the write()/
read() interface was removed from the bsg driver. With it went the
ability to submit multiple requests in one write() call (by passing
an array of sg_io_v4 objects rather than just one). My intention is
to re-add that capability in the sg driver, using the ioctls that
Linus suggested.

Initially I had both the sg v3 and v4 interfaces passing through the
two ioctls. Arnd Bergmann preferred that a separate pair of ioctls
be used for each interface. Hence SG_IOSUBMIT_V3 and SG_IORECEIVE_V3
were added for the v3 interface. And thus SG_IOSUBMIT and SG_IORECEIVE
only use the v4 interface. This cleaned up my code and documentation.
As a bonus, all four ioctls use the _IORW macros and can check the
fixed size of the third argument to each ioctl invocation.
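
For illustration, that size check falls out of the _IOWR() encoding (a
sketch; the magic and command numbers below are placeholders, not
necessarily the values this patchset assigns):

    #include <linux/ioctl.h>
    #include <linux/bsg.h>      /* struct sg_io_v4 */

    /* Hypothetical definitions in the style of the proposed ioctls:
     * 0x22 is the historic sg ioctl magic; 0x41 and 0x42 are
     * placeholder command numbers.
     */
    #define SG_IOSUBMIT   _IOWR(0x22, 0x41, struct sg_io_v4)
    #define SG_IORECEIVE  _IOWR(0x22, 0x42, struct sg_io_v4)

    /* _IOWR() folds direction, magic, command number and
     * sizeof(struct sg_io_v4) into the ioctl number itself, so the
     * driver can reject a third argument of the wrong (fixed) size --
     * something the hard-coded 0x2285 value of SG_IO cannot express.
     */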


> Simply extending sg to use the v4 header protocol in uapi/linux/bsg.h
> is fine modulo the code being in the right form.  The problems are the
> new ioctls you want to add that would need to be present there as well.

Why? The bsg driver has never accepted the sg v3 interface. It has also
removed functionality that I'm trying to re-add in this, and a follow-on
patchset. The bsg driver has reduced its functionality as a generalized
SCSI pass-through, but it has more than enough other roles to justify
its existence, for example as a SMP (SAS) pass-through and a
driver/transport specific pass-through to various LLDs. I don't see the
need for the sg and bsg driver to move forward in lockstep.

>   The specific question being how we support async or non-blocking I/O
> on the sg and bsg interfaces.  The standard way we add asynchronous I/O
> is supposed to be via .poll on the file descriptor.  You already use
> read and write in sg, and bsg doesn't have a polling interface,

It is hard to parse that last sentence; the sg driver has always
supported select/poll/epoll (and SIGIO plus RT signals). sg_poll()
will work just as well irrespective of whether a request is submitted
by write(), ioctl(SG_IOSUBMIT) or ioctl(SG_IOSUBMIT_V3).

> but it
> looks like we could use MSG to signal an ioctl is ready to be serviced
> for both.  Would shifting to a non-blocking poll based interface for
> ioctls remove the need to add these SG_IOSUBMIT/SG_IORECEIVE ioctls
> since we could now do everything over blocking or non-blocking SG_IO?

Not sure what the MSG is you refer to. The sg driver has had
ioctl(SG_GET_NUM_WAITING) for a long time. And it is made even
more lightweight in this patchset: it takes no locks, just reads one
atomic (a counter obviously) and returns. My guess is it's faster than
select()/poll()/epoll() but doesn't have the ability to monitor
multiple file descriptors.

Here is the full set of extra ioctls I have, or will be proposing:
    SG_IOSUBMIT
    SG_IOSUBMIT_V3
    SG_IORECEIVE
    SG_IORECEIVE_V3
    SG_IOABORT
    SG_SET_GET_EXTENDED

They are all new style ioctls using the _IOR(W) macros with fixed size
objects referenced by the ioctl's third argument. ioctls have been
referred to as the "garbage bin of Unix". Well that last one is a garbage
bin within a garbage bin :-) On the plus side, it keeps that list
relatively short.

Doug Gilbert


*** Tony Battersby is a sg driver power user. He has lamented wading through
     very large logs looking for some hint of why the sg driver is playing
     up. He has stated the strong preference for more, not less, ioctls.

BTW the write()/read() interface still remains in the sg driver after
these patchsets. It will continue to only support the sg v3 interface.
Perhaps calling it should cause a "deprecated" log message once for
each kernel run to annoy maintainers of old code.


* Re: [PATCH v3 00/20] sg: add v4 interface
From: Tony Battersby @ 2019-08-08 21:37 UTC
  To: dgilbert, James Bottomley, linux-scsi, linux-block, linux-api
  Cc: martin.petersen, hare, bvanassche, Arnd Bergmann

On 8/8/19 5:08 PM, Douglas Gilbert wrote:
>
> Here is the full set of extra ioctls I have, or will be proposing:
>     SG_IOSUBMIT
>     SG_IOSUBMIT_V3
>     SG_IORECEIVE
>     SG_IORECEIVE_V3
>     SG_IOABORT
>     SG_SET_GET_EXTENDED
>
> They are all new style ioctls using the _IOR(W) macros with fixed size
> objects referenced by the ioctl's third argument. ioctls have been
> referred to as the "garbage bin of Unix". Well that last one is a garbage
> bin within a garbage bin :-) On the plus side, it keeps that list
> relatively short.
>
> Doug Gilbert
>
>
> *** Tony Battersby is a sg driver power user. He has lamented wading through
>      very large logs looking for some hint of why the sg driver is playing
>      up. He has stated the strong preference for more, not less, ioctls.
>
One of the reasons ioctls have a bad reputation is because they can be
used to implement poorly-thought-out interfaces.  So kernel maintainers
push back on adding new ioctls.  But the pushback isn't about the
number of ioctls, it is about the poor interfaces.  My advice was that
in general, to implement a given API, it would be better to add more
ioctls with a simple interface for each one rather than to add fewer
extremely complex multiplexing ioctls.

Tony Battersby
Cybernetics



* Re: [PATCH v3 00/20] sg: add v4 interface
From: Bart Van Assche @ 2019-08-08 22:25 UTC
  To: Tony Battersby, dgilbert, James Bottomley, linux-scsi,
	linux-block, linux-api
  Cc: martin.petersen, hare, Arnd Bergmann

On 8/8/19 2:37 PM, Tony Battersby wrote:
> On 8/8/19 5:08 PM, Douglas Gilbert wrote:
>> *** Tony Battersby is a sg driver power user. He has lamented wading through
>>       very large logs looking for some hint of why the sg driver is playing
>>       up. He has stated the strong preference for more, not less, ioctls.
>>
> One of the reasons ioctls have a bad reputation is because they can be
> used to implement poorly-thought-out interfaces.  So kernel maintainers
> push back on adding new ioctls.  But the pushback isn't about the
> number of ioctls, it is about the poor interfaces.  My advice was that
> in general, to implement a given API, it would be better to add more
> ioctls with a simple interface for each one rather than to add fewer
> extremely complex multiplexing ioctls.

Hi Tony,

What is your motivation to use the SG_IO API? Is it controlling SMR 
drives or are you using SG_IO for another reason? I'm asking because 
depending on the use case there may be a better solution than using the 
SG_IO API.

Thanks,

Bart.


* Re: [PATCH v3 00/20] sg: add v4 interface
From: James Bottomley @ 2019-08-08 23:00 UTC
  To: dgilbert, linux-scsi, linux-block, linux-api
  Cc: martin.petersen, hare, bvanassche, Arnd Bergmann, Tony Battersby

On Thu, 2019-08-08 at 23:08 +0200, Douglas Gilbert wrote:
> On 2019-08-08 9:10 p.m., James Bottomley wrote:
> > On Wed, 2019-08-07 at 13:42 +0200, Douglas Gilbert wrote:
> > > This patchset extends the SCSI generic (sg) driver found in
> > > lk 5.3 .  The sg driver has a version number which is visible
> > > via ioctl(SG_GET_VERSION_NUM) and is bumped from 3.5.36 to
> > > 4.0.03 by this patchset. The additions and changes are
> > > described in some detail in this long webpage:
> > >      http://sg.danny.cz/sg/sg_v40.html
> > > 
> > > Most new features described in the above webpage are not
> > > implemented in this patchset.
> > 
> > Since this will be an extension of something that exists both in
> > your sg driver and in the block bsg interface (and thus needs an
> > implementation there), I added both linux-block and linux-api to
> > the cc (the latter because you're adding to an API).
> 
> The SG_IO ioctl has been the synchronous SCSI pass-through interface
> for over 15 years. Its quirk is that it takes two different formats
> depending on the character device its file descriptor belongs to:
>     - sg device file descriptor: sg v3 interface (struct sg_io_hdr)
>     - bsg device file descriptor: sg v4 interface (struct sg_io_v4)
> 
> I'm only proposing one change in the synchronous interface based
> on the SG_IO ioctl: to additionally accept the sg v4 interface in
> the sg driver.

Right, as I said below, adding v4 to sg looks to be fine.  We don't
need v3 in BSG since v4 is a superset.

> Arnd Bergmann considers two interface structures through one ioctl
> as undesirable but is prepared to accept SG_IO. His POV is as the
> maintainer of 32/64 bit compatibility ioctls. The case of SG_IO
> is a well established exception to his rules (i.e. a known evil).
> 
> I don't believe extending ioctl SG_IO for asynchronous work is a
> good idea. As pointed out above, it is already overloaded too
> much. Additionally it would need further flags to differentiate
> these cases:
>     - sync/async
>     - if async: submission or reception
>     - and optionally if async: abort (an inflight request)
>     - and do you want to add multiple requests in there too?
> 
> So are you looking at reducing the number of ioctls to the absolute
> minimum? If so I don't think the SG_IO ioctl is the correct vehicle
> for that. It doesn't use the _IOR(W) macros; instead it is hard-coded
> at 0x2285 ***. And the size checking offered by the _IOR(W)
> macro (on the object pointed to by the 3rd argument) is useless with
> SG_IO because it takes two different sized objects. Worse, one of
> those objects changes size between 32 and 64 bits, while the other
> does not.

OK, so this is where interface design is important.  It's perfectly
possible to design an async interface using the current v4 BSG SG_IO.
You simply open the device with O_NONBLOCK and send the ioctl, which
will return immediately.  To find out when the command completes you
poll (we'd obviously need to add polling in bsg, but that's fairly
simple).  Since poll/select/epoll is our standard async event handling
mechanism, we should at least try it and have a good reason why it
doesn't work before inventing our own equivalent.
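
A minimal sketch of that flow (hypothetical: it assumes the
submit-and-return O_NONBLOCK behaviour for SG_IO and the bsg polling
support described above, neither of which exists today; the device
node is arbitrary):

    #define _GNU_SOURCE          /* for POLLMSG */
    #include <fcntl.h>
    #include <poll.h>
    #include <sys/ioctl.h>
    #include <scsi/sg.h>         /* SG_IO */
    #include <linux/bsg.h>       /* struct sg_io_v4 */

    int fd = open("/dev/bsg/0:0:0:0", O_RDWR | O_NONBLOCK);
    struct sg_io_v4 hdr = { .guard = 'Q',
                            .protocol = BSG_PROTOCOL_SCSI,
                            .subprotocol = BSG_SUB_PROTOCOL_SCSI_CMD,
                            /* ... CDB, data and sense buffers ... */ };

    ioctl(fd, SG_IO, &hdr);      /* would submit and return immediately */

    struct pollfd pfd = { .fd = fd, .events = POLLMSG };
    poll(&pfd, 1, -1);           /* wait for the completion event */
    /* how the completed response is then collected is exactly the
     * part of the interface still being discussed here */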

> Stepping back, this started about 18 months ago when security
> janitors got upset about the bsg driver's use of write()/read() for
> its async interface. Linus Torvalds suggested SG_IOSUBMIT and
> SG_IORECEIVE to replace write() and read() respectively in bsg.
> Instead the write()/read() interface was removed from the bsg driver.
> With it went the ability to submit multiple requests in one write()
> call (by passing an array of sg_io_v4 objects rather than just one).
> My intention is to re-add that capability in the sg driver, using the
> ioctls that Linus suggested.

What I'm saying is let's discuss the interface design before we add it.
The submit/receive interface is essentially a packet one.  The problem
with a packet one is that it is request/response, where the response
has likely altered some user memory area, so each response has to be
matched with the outbound request to find out what happened ... how is
that done?

One way to do it simply with existing SG_IO is to allow only one
outstanding request per FD, so if you want five requests, you open five
FDs and then poll all of them to see which one returns.  I'm not saying
it's the best because FDs are a somewhat precious commodity, but it is
possible and it's a simple extension of what exists today.
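
A sketch of that one-request-per-FD pattern (hypothetical, with the
same assumptions as above; fds[] holds five open bsg file descriptors
and hdrs[] their prepared struct sg_io_v4 headers):

    struct pollfd pfds[5];

    for (int i = 0; i < 5; i++) {
        pfds[i].fd = fds[i];                /* one fd per request */
        pfds[i].events = POLLMSG;           /* 'msg ready' completion */
        ioctl(fds[i], SG_IO, &hdrs[i]);     /* submit, returns at once */
    }
    poll(pfds, 5, -1);                      /* wait for any of the 5 */
    for (int i = 0; i < 5; i++)
        if (pfds[i].revents)
            ;   /* request i completed; collect its response here */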

> Initially I had both the sg v3 and v4 interfaces passing through the
> two ioctls. Arnd Bergmann preferred that a separate pair of ioctls
> be used for each interface. Hence SG_IOSUBMIT_V3 and SG_IORECEIVE_V3
> were added for the v3 interface. And thus SG_IOSUBMIT and
> SG_IORECEIVE only use the v4 interface. This cleaned up my code and
> documentation. As a bonus, all four ioctls use the _IORW macros and
> can check the fixed size of the third argument to each ioctl
> invocation.

Can we talk first about how the interface should work?  We can get into
assigning ioctls and what extensions are needed later.

> > Simply extending sg to use the v4 header protocol in
> > uapi/linux/bsg.h is fine modulo the code being in the right
> > form.  The problems are the new ioctls you want to add that would
> > need to be present there as well.
> 
> Why? The bsg driver has never accepted the sg v3 interface. It has
> also removed functionality that I'm trying to re-add in this, and a
> follow-on patchset. The bsg driver has reduced its functionality as a
> generalized SCSI pass-through, but it has more than enough other
> roles to justify its existence, for example as a SMP (SAS) pass-
> through and a driver/transport specific pass-through to various LLDs.
> I don't see the need for the sg and bsg driver to move forward in
> lockstep.

You quote Linus a lot above, but he also said "I wonder if we could at
least try to unify the bsg/sg code - possibly by making sg use the
prettier bsg code (but definitely have to add all the security
measures)".  It's in the interests of unity that we need to make the
code paths look as much the same as possible, so eventually one could
call the other.  Ideally sg would call bsg for v4 and we'd add async to bsg so
it would work for both.

> >   The specific question being how we support async or non-blocking
> > I/O on the sg and bsg interfaces.  The standard way we add
> > asynchronous I/O is supposed to be via .poll on the file
> > descriptor.  You already use read and write in sg, and bsg doesn't
> > have a polling interface,
> 
> It is hard to parse that last sentence; the sg driver has always
> supported select/poll/epoll (and SIGIO plus RT signals). sg_poll()
> will work just as well irrespective of whether a request is submitted
> by write(), ioctl(SG_IOSUBMIT) or ioctl(SG_IOSUBMIT_V3).

The way I read how the interface works, it only works when the fd is
ready for read (i.e. a packet submitted with write has a response
waiting).  We need to extend it to make it work with ioctl.  Since an
ioctl return isn't either a 'ready to read' or 'ready to write' event
on the fd, I was suggesting using the 'msg ready' event for it.

> > but it looks like we could use MSG to signal an ioctl is ready to
> > be serviced for both.  Would shifting to a non-blocking poll based
> > interface for ioctls remove the need to add these
> > SG_IOSUBMIT/SG_IORECEIVE ioctls since we could now do everything
> > over blocking or non-blocking SG_IO?
> 
> Not sure what the MSG is you refer to. The sg driver has had
> ioctl(SG_GET_NUM_WAITING) for a long time. And it is made even
> more lightweight in this patchset: it takes no locks, just reads one
> atomic (a counter obviously) and returns. My guess is it's faster than
> select()/poll()/epoll() but doesn't have the ability to monitor
> multiple file descriptors.

The poll interfaces don't tell you how many outstanding requests you
have; they tell you when some event happened on the fd.  The event
we're looking for with async packets is an indication of what
completed.

> Here is the full set of extra ioctls I have, or will be proposing:
>     SG_IOSUBMIT
>     SG_IOSUBMIT_V3
>     SG_IORECEIVE
>     SG_IORECEIVE_V3
>     SG_IOABORT
>     SG_SET_GET_EXTENDED

Well, I/O cancellation is another huge can of worms, but let's get the
submit interface sorted out first before worrying about how
cancellation works.

> They are all new style ioctls using the _IOR(W) macros with fixed
> size objects referenced by the ioctl's third argument. ioctls have
> been referred to as the "garbage bin of Unix". Well that last one is
> a garbage bin within a garbage bin :-) On the plus side, it keeps
> that list relatively short.
> 
> Doug Gilbert
> 
> 
> *** Tony Battersby is a sg driver power user. He has lamented wading
> through very large logs looking for some hint of why the sg driver is
> playing up. He has stated the strong preference for more, not less,
> ioctls.

I'm not really bothered about the number of ioctls; I'm bothered about
getting the interface right.

> BTW the write()/read() interface still remains in the sg driver after
> these patchsets. It will continue to only support the sg v3
> interface. Perhaps calling it should cause a "deprecated" log message
> once for each kernel run to annoy maintainers of old code.

That would be ideal given all the security complaints we have about it.

James



* Re: [PATCH v3 00/20] sg: add v4 interface
From: Tony Battersby @ 2019-08-09 13:28 UTC
  To: Bart Van Assche, dgilbert, James Bottomley, linux-scsi,
	linux-block, linux-api
  Cc: martin.petersen, hare, Arnd Bergmann

On 8/8/19 6:25 PM, Bart Van Assche wrote:
> On 8/8/19 2:37 PM, Tony Battersby wrote:
>> On 8/8/19 5:08 PM, Douglas Gilbert wrote:
>>> *** Tony Battersby is a sg driver power user. He has lamented wading through
>>>       very large logs looking for some hint of why the sg driver is playing
>>>       up. He has stated the strong preference for more, not less, ioctls.
>>>
>> One of the reasons ioctls have a bad reputation is because they can be
>> used to implement poorly-thought-out interfaces.  So kernel maintainers
>> push back on adding new ioctls.  But the pushback isn't about the
>> number of ioctls, it is about the poor interfaces.  My advice was that
>> in general, to implement a given API, it would be better to add more
>> ioctls with a simple interface for each one rather than to add fewer
>> extremely complex multiplexing ioctls.
> Hi Tony,
>
> What is your motivation to use the SG_IO API? Is it controlling SMR 
> drives or are you using SG_IO for another reason? I'm asking because 
> depending on the use case there may be a better solution than using the 
> SG_IO API.
>
> Thanks,
>
> Bart.
>
Actually I used the asynchronous write()/read()/poll() sg interface to
implement RAID-like functionality for tape drives and medium changers,
in a commercial product that has been around since 2002.  These days our
products use a lot more disk I/O than tape I/O, so I don't write much
new code using the sg interface anymore, although that code is still
there and has to be maintained as needed.  So I don't have any immediate
plans to use any of the new sgv4 features being introduced, and
unfortunately I am way too busy to even give it a good review...

Tony Battersby
Cybernetics



* Re: [PATCH v3 00/20] sg: add v4 interface
From: Christoph Hellwig @ 2019-08-12 15:23 UTC
  To: James Bottomley
  Cc: Douglas Gilbert, linux-scsi, linux-block, linux-api,
	martin.petersen, hare, bvanassche

On Thu, Aug 08, 2019 at 12:10:55PM -0700, James Bottomley wrote:
> Since this will be an extension of something that exists both in your
> sg driver and in the block bsg interface (and thus needs an
> implementation there), I added both linux-block and linux-api to the cc
> (the latter because you're adding to an API).
> 
> Simply extending sg to use the v4 header protocol in uapi/linux/bsg.h
> is fine modulo the code being in the right form.  The problems are the
> new ioctls you want to add that would need to be present there as well.
>  The specific question being how we support async or non-blocking I/O
> on the sg and bsg interfaces.  The standard way we add asynchronous I/O
> is supposed to be via .poll on the file descriptor.  You already use
> read and write in sg, and bsg doesn't have a polling interface, but it
> looks like we could use MSG to signal an ioctl is ready to be serviced
> for both.  Would shifting to a non-blocking poll based interface for
> ioctls remove the need to add these SG_IOSUBMIT/SG_IORECEIVE ioctls
> since we could now do everything over blocking or non-blocking SG_IO?

I've spent some time wading through this patchset, and it is huge.  I think
we need to stage it a bit better and split it into multiple series.

 1) One (or maybe even multiple) with all the cleanups and minor
    speedups.  That alone is a lot of changes, and will take a while
    to settle.
 2) Extending the bsg/v4 API to /dev/sg.  I think that is very useful,
    although I need to look at the details a bit more.
 3) Adding a new async API.  While this seems very useful from the
    theoretical perspective, I really think the guts need to be in
    common code and then be used by sg and the block device nodes
    (if it happens to be an ioctl).  What worries me a bit there
    is that this adds yet another way to deal with async I/O.  I wonder
    if we can fit this into aio/io_uring somehow.  But I'd rather
    not even think about that much until we've done all the groundwork.


* Re: [PATCH v3 00/20] sg: add v4 interface
From: Douglas Gilbert @ 2019-08-14 4:19 UTC
  To: James Bottomley, linux-scsi, linux-block, linux-api
  Cc: martin.petersen, hare, bvanassche, Arnd Bergmann, Tony Battersby

On 2019-08-09 1:00 a.m., James Bottomley wrote:
> On Thu, 2019-08-08 at 23:08 +0200, Douglas Gilbert wrote:
>> On 2019-08-08 9:10 p.m., James Bottomley wrote:
>>> On Wed, 2019-08-07 at 13:42 +0200, Douglas Gilbert wrote:
>>>> This patchset extends the SCSI generic (sg) driver found in
>>>> lk 5.3 .  The sg driver has a version number which is visible
>>>> via ioctl(SG_GET_VERSION_NUM) and is bumped from 3.5.36 to
>>>> 4.0.03 by this patchset. The additions and changes are
>>>> described in some detail in this long webpage:
>>>>       http://sg.danny.cz/sg/sg_v40.html
>>>>
>>>> Most new features described in the above webpage are not
>>>> implemented in this patchset.
>>>
>>> Since this will be an extension of something that exists both in
>>> your sg driver and in the block bsg interface (and thus needs an
>>> implementation there), I added both linux-block and linux-api to
>>> the cc (the latter because you're adding to an API).
>>
>> The SG_IO ioctl has been the synchronous SCSI pass-through interface
>> for over 15 years. Its quirk is that it takes two different formats
>> depending on the character device its file descriptor belongs to:
>>      - sg device file descriptor: sg v3 interface (struct sg_io_hdr)
>>      - bsg device file descriptor: sg v4 interface (struct sg_io_v4)
>>
>> I'm only proposing one change in the synchronous interface based
>> on the SG_IO ioctl: to additionally accept the sg v4 interface in
>> the sg driver.
> 
> Right, as I said below, adding v4 to sg looks to be fine.  We don't
> need v3 in BSG since v4 is a superset.
> 
>> Arnd Bergmann considers two interface structures through one ioctl
>> as undesirable but is prepared to accept SG_IO. His POV is as the
>> maintainer of 32/64 bit compatibility ioctls. The case of SG_IO
>> is a well established exception to his rules (i.e. a known evil).
>>
>> I don't believe extending ioctl SG_IO for asynchronous work is a
>> good idea. As pointed out above, it is already overloaded too
>> much. Additionally it would need further flags to differentiate
>> these cases:
>>      - sync/async
>>      - if async: submission or reception
>>      - and optionally if async: abort (an inflight request)
>>      - and do you want to add multiple requests in there too?
>>
>> So are you looking at reducing the number of ioctls to the absolute
>> minimum? If so I don't think the SG_IO ioctl is the correct vehicle
>> for that. It doesn't use the _IOR(W) macros; instead it is hard-coded
>> at 0x2285 ***. And the size checking offered by the _IOR(W)
>> macro (on the object pointed to by the 3rd argument) is useless with
>> SG_IO because it takes two different sized objects. Worse, one of
>> those objects changes size between 32 and 64 bits, while the other
>> does not.
> 
> OK, so this is where interface design is important.  It's perfectly
> possible to design an async interface using the current v4 BSG SG_IO.
> You simply open the device with O_NONBLOCK and send the ioctl, which
> will return immediately.  To find out when the command completes you
> poll (we'd obviously need to add polling in bsg, but that's fairly
> simple).  Since poll/select/epoll is our standard async event handling
> mechanism, we should at least try it and have a good reason why it
> doesn't work before inventing our own equivalent.

I don't see much new interface design here over what was done by
Lawrence Foard 27 years ago. Linux cleverness has broken the
classic use of read() and write() in char drivers. Hard to hold that
against the original design. The design works if the last 27 years
is any guide and maps reasonably closely to how the mid-level
(or the block layer) issues SCSI commands. All other general
purpose OSes that I am aware of, with the exception of Darwin,
have a similar SCSI pass-through, at least on the sync side.

>> Stepping back, this started about 18 months ago when security
>> janitors got upset about the bsg driver's use of write()/read() for
>> its async interface. Linus Torvalds suggested SG_IOSUBMIT and
>> SG_IORECEIVE to replace write() and read() respectively in bsg.
>> Instead the write()/read() interface was removed from the bsg driver.
>> With it went the ability to submit multiple requests in one write()
>> call (by passing an array of sg_io_v4 objects rather than just one).
>> My intention is to re-add that capability in the sg driver, using the
>> ioctls that Linus suggested.
> 
> What I'm saying is let's discuss the interface design before we add it.
> The submit/receive interface is essentially a packet one.  The problem
> with a packet one is that it is request/response, where the response
> has likely altered some user memory area, so each response has to be
> matched with the outbound request to find out what happened ... how is
> that done?

Well, this has been fundamental to the sg driver design. There are
now three mechanisms, described below with the oldest first (a sketch
of the first follows the list):
   1) pack_id: an identifying integer supplied by the user in the
      submit that can be given as input to the receive call so
      it either yields the (first) matching response, or waits (sync
      action) or returns saying nothing was found (async action).
      pack_id was present in the sg driver in version 1.0 of Linux
      in 1992. It still works fine.
   2) usr_ptr: sometimes described as a "closure pointer". User
      provided, untouched by the driver, and placed in the
      corresponding response. Present in sg v3 and v4 interfaces.
      In place over 15 years.
   3) tag: like pack_id but issued by the block layer so it is
      _output_ by the submit call. In practice it does not work
      reliably *** due to questionable design in the block layer. After
      blk_execute_rq_nowait() is called the tag is picked out of
      its data structure. However that command may not have started
      (so no valid tag) or could have already finished (a stale tag).
      Tag is a new feature in the proposed patchsets.
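
A sketch of mechanism 1) against the long-standing v3 async API (this
part is existing practice, not new with the patchset; /dev/sg1, the
TEST UNIT READY command and the pack_id value 42 are arbitrary, and
error handling is omitted):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <scsi/sg.h>

    int main(void)
    {
        unsigned char tur_cdb[6] = { 0 };   /* TEST UNIT READY */
        unsigned char sense[32];
        int sg_fd = open("/dev/sg1", O_RDWR);
        int force = 1;
        struct sg_io_hdr h = { .interface_id = 'S',
                               .dxfer_direction = SG_DXFER_NONE,
                               .cmd_len = sizeof(tur_cdb), .cmdp = tur_cdb,
                               .mx_sb_len = sizeof(sense), .sbp = sense,
                               .timeout = 20000,         /* milliseconds */
                               .pack_id = 42 };          /* caller-chosen */

        ioctl(sg_fd, SG_SET_FORCE_PACK_ID, &force); /* read() matches id */
        write(sg_fd, &h, sizeof(h));                /* async submit */
        /* ... other work, or more submits with other pack_ids ... */
        h.pack_id = 42;                 /* which response we want back */
        read(sg_fd, &h, sizeof(h));     /* blocks until id 42 completes */
        printf("status=0x%x\n", h.status);
        return 0;
    }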

> One way to do it simply with existing SG_IO is to allow only one
> outstanding request per FD, so if you want five requests, you open five
> FDs and then poll all of them to see which one returns.  I'm not saying
> it's the best because FDs are a somewhat precious commodity, but it is
> possible and it's a simple extension of what exists today.

I believe this is adequately covered by the mechanisms above.

>> Initially I had both the sg v3 and v4 interfaces passing through the
>> two ioctls. Arnd Bergmann preferred that a separate pair of ioctls
>> be used for each interface. Hence SG_IOSUBMIT_V3 and SG_IORECEIVE_V3
>> were added for the v3 interface. And thus SG_IOSUBMIT and
>> SG_IORECEIVE only use the v4 interface. This cleaned up my code and
>> documentation. As a bonus, all four ioctls use the _IORW macros and
>> can check the fixed size of the third argument to each ioctl
>> invocation.
> 
> Can we talk first about how the interface should work?  We can get into
> assigning ioctls and what extensions are needed later.

I have been trying for 9 months to get some feedback on
the current design. The attitude seems to be: present some
patches; no, those are too big; etc. As for my design
document at:
     http://sg.danny.cz/sg/sg_v40.html

tl;dr ?

Evidently the write() and read() system calls have special features
that could be exploited by an attack via the bsg or sg drivers.
ioctl(SG_IOSUBMIT) is just write(sg_fd, ...) in a form that is less
exploitable. There is a similar relationship between
ioctl(SG_IORECEIVE) and read(sg_fd, ...). Otherwise the async
interface design remains the same in the sg driver as it was
when introduced 27 years ago. ^^^

Bart Van Assche hinted at a better API design but didn't present
it. If he did, that would be the first time an alternate API
design was presented for async usage in the 20 years that I have
been associated with the driver.

>>> Simply extending sg to use the v4 header protocol in
>>> uapi/linux/bsg.h is fine modulo the code being in the right
>>> form.  The problems are the new ioctls you want to add that would
>>> need to be present there as well.
>>
>> Why? The bsg driver has never accepted the sg v3 interface. It has
>> also removed functionality that I'm trying to re-add in this, and a
>> follow-on patchset. The bsg driver has reduced its functionality as a
>> generalized SCSI pass-through, but it has more than enough other
>> roles to justify its existence, for example as a SMP (SAS) pass-
>> through and a driver/transport specific pass-through to various LLDs.
>> I don't see the need for the sg and bsg driver to move forward in
>> lockstep.
> 
> You quote Linus a lot above, but he also said "I wonder if we could at

twice

> least try to unify the bsg/sg code - possibly by making sg use the
> prettier bsg code (but definitely have to add all the security
> measures)".  It's in the interests of unity that we need to make the
> code paths look as much the same as possible, so eventually one could
> call the other.  Ideally sg would call bsg for v4 and we'd add async to bsg so
> it would work for both.

The bsg driver needs to be rewritten. Ah, no need: by removing the
async (including multiple requests) part, the broken implementation is
no more. Its implementation is sufficient to issue synchronous
SCSI commands, not much more +++. The bsg v4 interface implementation
was only a superset of the sg driver's v3 interface while the Linux
kernel supported bidi. That has unwisely been (completely) removed
from the kernel. So now I would regard the bsg v4 implementation
as a subset of the functionality in the sg v3 implementation which
itself is a subset of the proposed sg v4 implementation. So I see
no reason to use the bsg v4 implementation or handicap the proposed
sg v4 implementation.

>>>    The specific question being how we support async or non-blocking
>>> I/O on the sg and bsg interfaces.  The standard way we add
>>> asynchronous I/O is supposed to be via .poll on the file
>>> descriptor.  You already use read and write in sg, and bsg doesn't
>>> have a polling interface,
>>
>> It is hard to parse that last sentence; the sg driver has always
>> supported select/poll/epoll (and SIGIO plus RT signals). sg_poll()
>> will work just as well irrespective of whether a request is submitted
>> by write(), ioctl(SG_IOSUBMIT) or ioctl(SG_IOSUBMIT_V3).
> 
> The way I read how the interface works, it only works when the fd is
> ready for read (i.e. a packet submitted with write has a response
> waiting).  We need to extend it to make it work with ioctl.  Since an
> ioctl return isn't either a 'ready to read' or 'ready to write' event
> on the fd, I was suggesting using the 'msg ready' event for it.

A ready response causes a POLL_IN event. POLL_OUT is only cleared when
the command queueing on a fd is turned off and a request is inflight or
awaiting an ioctl(SG_IORECEIVE) (a.k.a. read()). So the existing POLL
events map quite well. As a bonus, if there is a surprise removal of
a sg device, then a SIGHUP is generated.

As I noted, if an app doesn't need to monitor multiple file
descriptors, then ioctl(SG_GET_NUM_WAITING) is an option. It has been in
place for 20 years and is sped up by this patchset. I clocked it at
500 nanoseconds per call (i.e. 2 million could be issued in one thread
in 1 second) on my laptop. And since you then find out how many are
waiting to be processed, the user could issue a multiple requests
ioctl(SG_IORECEIVE) and pick them all up in one system call invocation.
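
A sketch of that single-fd pattern (existing API only; headers as in
the earlier sketch, and sg_fd is an open sg file descriptor with
several requests already in flight):

    struct pollfd pfd = { .fd = sg_fd, .events = POLLIN };
    int num = 0;

    poll(&pfd, 1, -1);                       /* block until >= 1 done */
    ioctl(sg_fd, SG_GET_NUM_WAITING, &num);  /* cheap, lock-free count */
    while (num-- > 0) {
        struct sg_io_hdr h = { .interface_id = 'S', .pack_id = -1 };
        read(sg_fd, &h, sizeof(h));          /* reap oldest response */
        /* h.pack_id (and h.usr_ptr) identify the original request */
    }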

>>> but it looks like we could use MSG to signal an ioctl is ready to
>>> be serviced for both.  Would shifting to a non-blocking poll based
>>> interface for ioctls remove the need to add these
>>> SG_IOSUBMIT/SG_IORECEIVE ioctls since we could now do everything
>>> over blocking or non-blocking SG_IO?
>>
>> Not sure what the MSG is you refer to. The sg driver has had
>> ioctl(SG_GET_NUM_WAITING) for a long time. And it is made even
>> more lightweight in this patchset: it takes no locks, just reads one
>> atomic (a counter obviously) and returns. My guess is it's faster than
>> select()/poll()/epoll() but doesn't have the ability to monitor
>> multiple file descriptors.
> 
> The poll interfaces don't tell you how many outstanding requests you
> have; they tell you when some event happened on the fd.  The event
> we're looking for with async packets is an indication of what
> completed.

As well as ioctl(SG_GET_NUM_WAITING) described above there is also
ioctl(SG_GET_PACK_ID) which will return the pack_id (or tag) of the
response that has been waiting the longest to be processed (or -1
if nothing is waiting).

In the area of async polling the existing sg driver supports both
standard Unix techniques and driver-specific techniques that convey
extra information.

>> Here is the full set of extra ioctls I have, or will be proposing:
>>      SG_IOSUBMIT
>>      SG_IOSUBMIT_V3
>>      SG_IORECEIVE
>>      SG_IORECEIVE_V3
>>      SG_IOABORT
>>      SG_SET_GET_EXTENDED
> 
> Well, I/O cancellation is another huge can of worms, but let's get the
> submit interface sorted out first before worrying about how
> cancellation works.

Yes, and it's a great way to test code. The sgh_dd utility (sg3_utils
package, testing folder) has an ae=AEN option for "abort every n
commands". When it decides to abort a command it sets up another
(killer) thread that waits a random amount of time (within bounds)
then issues ioctl(SG_IOABORT). Great fun. For even more fun there
is ae=AEN,MAEN where the extra MAEN parameter is for "abort every
n multiple requests invocations". There are a lot more corner cases
when aborting a multiple requests invocation. Testing with these
found several problems with my code.

It seems to me we are not actually on the same page when it comes
to where this project stands.

>> They are all new style ioctls using the _IOR(W) macros with fixed
>> size objects referenced by the ioctl's third argument. ioctls have
>> been referred to as the "garbage bin of Unix". Well that last one is
>> a garbage bin within a garbage bin :-) On the plus side, it keeps
>> that list relatively short.
>>
>> Doug Gilbert
>>
>>
>> *** Tony Battersby is a sg driver power user. He has lamented wading
>> through very large logs looking for some hint of why the sg driver is
>> playing up. He has stated the strong preference for more, not less,
>> ioctls.
> 
> I'm not really bothered about the number of ioctls; I'm bothered about
> getting the interface right.

Great. Then please leave ioctl(SG_IO) as is (i.e. sync only).

>> BTW the write()/read() interface still remains in the sg driver after
>> these patchsets. It will continue to only support the sg v3
>> interface. Perhaps calling it should cause a "deprecated" log message
>> once for each kernel run to annoy maintainers of old code.
> 
> That would be ideal given all the security complaints we have about it.

That is easily done.

Doug Gilbert


*** at least with scsi_debug with a command delay set at 5000 nanoseconds.
     With real storage devices with latencies around 50 microseconds or higher
     it may be more reliable. Some sort of handle, or the tag itself, passed
     back from blk_execute_rq_nowait() is the real solution.

+++ but the bsg driver can do much more than issue SCSI commands. The
     question is, given the removal of its SCSI async functionality, should it be
     generating SCSI commands at all? Just the general ioctl(SG_IO) as
     supported by sd, sr and st devices should suffice.

^^^ another approach to the write()/read() security problems would be for
     their implementation to check if the given file descriptor referred to a
     char (and maybe block) device and, if so, reduce its exploitable behaviour
     accordingly.


* Re: [PATCH v3 00/20] sg: add v4 interface
From: Bart Van Assche @ 2019-08-15 17:30 UTC
  To: dgilbert, James Bottomley, linux-scsi, linux-block, linux-api
  Cc: martin.petersen, hare, Arnd Bergmann, Tony Battersby, Christoph Hellwig

On 8/13/19 9:19 PM, Douglas Gilbert wrote:
> Bart Van Assche hinted at a better API design but didn't present
> it. If he did, that would be the first time an alternate API
> design was presented for async usage in the 20 years that I have
> been associated with the driver.

I would like to start from the use cases instead of the implementation 
of a new SG/IO interface. My employer uses the SG/IO interface for 
controlling SMR and HSMR disks. What we need is the ability to discover, 
read, write and configure such disks, support for the non-standard HSMR 
flex protocol, the ability to give certain users or groups access to a 
subset of the LBAs and also the ability to make that information 
persistent. I think that such functionality could be implemented by 
extending LVM and by adding support for all ZBC commands we need in the 
block layer, device mapper layer and also in the asynchronous I/O layer. 
The block, dm and aio layers already support submitting commands 
asynchronously but do not yet support all the ZBC commands that we use.

Are there any SG/IO use cases that have not yet been mentioned in this 
e-mail thread? If SMR and HSMR are the primary use cases for SG/IO, 
should asynchronous command support be added in the SG/IO layer or 
should rather ZBC support in the block, dm and aio layers be improved?

Thanks,

Bart.


* Re: [PATCH v3 00/20] sg: add v4 interface
From: Douglas Gilbert @ 2019-08-16 15:59 UTC
  To: Bart Van Assche, James Bottomley, linux-scsi, linux-block, linux-api
  Cc: martin.petersen, hare, Arnd Bergmann, Tony Battersby, Christoph Hellwig

On 2019-08-15 1:30 p.m., Bart Van Assche wrote:
> On 8/13/19 9:19 PM, Douglas Gilbert wrote:
>> Bart Van Assche hinted at a better API design but didn't present
>> it. If he did, that would be the first time an alternate API
>> design was presented for async usage in the 20 years that I have
>> been associated with the driver.
> 
> I would like to start from the use cases instead of the implementation of a new 
> SG/IO interface. My employer uses the SG/IO interface for controlling SMR and 

There is no "new" SG/IO interface. Linux has broken the ability of char
drivers to safely use the read() and write() system calls. This
adversely impacts the bsg and sg drivers. In response the following
replacement mappings have been suggested in my first sg patchset:

1) For sg driver currently in production, its async interface:
        write(sg_fd, &sg_v3_obj, sizeof(sg_v3_obj))
          ----->  ioctl(sg_fd, SG_IOSUBMIT_V3, &sg_v3_obj)
    and
        read(sg_fd, &sg_v3_obj, sizeof(sg_v3_obj))
          ----->  ioctl(sg_fd, SG_IORECEIVE_V3, &sg_v3_obj)

    And send out a WARN_ONCE when write(sg_fd, &sg_v3_obj,...) is used.

2) For the async portion of the bsg driver that was removed last
    year, the following, slightly more complex mapping is proposed:
        write(bsg_fd, &sg_v4_obj, sizeof(sg_v4_obj))
          ----->  ioctl(sg_fd_equiv_bsg, SG_IOSUBMIT, &sg_v4_obj)
    and
        read(bsg_fd, &sg_v4_obj, sizeof(sg_v4_obj))
          ----->  ioctl(sg_fd_equiv_bsg, SG_IORECEIVE, &sg_v4_obj)

    The bsg_fd --> sg_fd_equiv_bsg mapping can be done with the help
    of sysfs.


There is another case with the bsg async interface where the third
argument to write() and read() is a multiple of the size of a sg_v4_obj.
I call that a multiple requests invocation. That is handled in my second
patchset with an extra level of indirection. Yes, that is a change in
the API, but it is more on the syntax side than the semantics side.

The ioctls have another advantage over the write()/read() interface.
The reader will notice that both SG_IOSUBMIT and SG_IORECEIVE are defined
with the _IOWR() macro indicating bi-directional dataflow. The "reverse"
direction dataflow for the submit side is when a tag is sent back from
the block layer. For the receive side the reverse flow is when matching
either by pack_id or tag.

Also, some longstanding features of the sg async API, such as
ioctl(SG_GET_NUM_WAITING), can lead to a reduction in API traffic. Say
we have 20 SCSI commands that don't depend on one another (e.g. READ
GATHERED). They could be submitted asynchronously with a single
multiple-requests invocation of ioctl(SG_IOSUBMIT) with the flag
SGV4_FLAG_IMMED set. The user code could then wait for one (any one)
to finish and process it (so that is two API calls so far). Now an
ioctl(SG_GET_NUM_WAITING) could be issued and, say, it reports 3; then
a multiple-requests invocation of ioctl(SG_IORECEIVE) for those 3
could be sent and complete promptly. Now the tally of API calls is
up to 4. If another ioctl(SG_GET_NUM_WAITING) were issued and, say,
it yielded 16, then a multiple-requests invocation of
ioctl(SG_IORECEIVE) for those 16 would complete the originally
submitted 20 SCSI commands. The total tally of API calls is 6, with
only 1 of those waiting. The wait could be made fully async by
using a polling loop or a signal to replace that (and any other)
wait.

If the user space didn't mind blocking then the whole 20 SCSI commands
could be processed efficiently with a single multiple-requests
invocation using ioctl(SG_IOSUBMIT) with the SGV4_FLAG_IMMED flag
cleared. It would first issue all 20 commands, then return after all 20
commands were complete. That is an extension of the removed bsg async
SCSI API, but a pretty good one IMO.
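
As a hypothetical sketch of the flow just described (SG_IOSUBMIT,
SG_IORECEIVE and SGV4_FLAG_IMMED are from the proposed patchset, not
mainline, and the way an array of sg_io_v4 objects is handed over is
patchset-specific, so only the call sequence is shown):

    struct sg_io_v4 req[20];    /* 20 independent commands, each with
                                 * SGV4_FLAG_IMMED set in its flags   */
    int num = 0;

    ioctl(sg_fd, SG_IOSUBMIT, req);          /* 1: submit all 20      */
    ioctl(sg_fd, SG_IORECEIVE, &req[0]);     /* 2: wait for any one   */
    ioctl(sg_fd, SG_GET_NUM_WAITING, &num);  /* 3: say it reports 3   */
    ioctl(sg_fd, SG_IORECEIVE, req);         /* 4: reap those 3       */
    ioctl(sg_fd, SG_GET_NUM_WAITING, &num);  /* 5: say it reports 16  */
    ioctl(sg_fd, SG_IORECEIVE, req);         /* 6: reap the final 16  */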

The sg driver's async model remains basically the same as when the
driver first appeared in 1992. Naturally there have been enhancements
along the way, such as that last example.

> HSMR disks. What we need is the ability to discover, read, write and configure 
> such disks, support for the non-standard HSMR flex protocol, the ability to give 
> certain users or groups access to a subset of the LBAs and also the ability to 
> make that information persistent. I think that such functionality could be 
> implemented by extending LVM and by adding support for all ZBC commands we need 
> in the block layer, device mapper layer and also in the asynchronous I/O layer. 
> The block, dm and aio layers already support submitting commands asynchronously 
> but do not yet support all the ZBC commands that we use.

I believe that you will find that the more layers of abstraction that are
placed between the actual device and the OS level API, the more difficult
the discovery process will be. And in some cases you will need to get to
a management layer to let those management functions "pass-through" those
layers. Some RAID card drivers take advantage of the no_uld_attach flag in
scsi_device to expose real devices, but only to the sg/bsg interface for
management purposes (for utilities like smartmontools) and do not produce
sd device nodes.

> Are there any SG/IO use cases that have not yet been mentioned in this e-mail 
> thread? If SMR and HSMR are the primary use cases for SG/IO, should asynchronous 
> command support be added in the SG/IO layer or should rather ZBC support in the 
> block, dm and aio layers be improved?

My guess is quite a few, and the companies involved don't want to talk about
their use publicly. For example when a big computer company starts reporting
errors, I believe my role is to try and fix the errors, not to interrogate
them about how and why they are using the driver. On the other hand, Tony
Battersby has been relatively active on this list and produced patches for
the sg driver over several years. Tony is well positioned to know the
driver's strengths and weaknesses but has said that he has little time to
review these patchsets. I appreciate any feedback I can get from him.

Doug Gilbert




* Re: [PATCH v3 00/20] sg: add v4 interface
From: Greg KH @ 2019-08-16 17:19 UTC
  To: Douglas Gilbert
  Cc: Bart Van Assche, James Bottomley, linux-scsi, linux-block,
	linux-api, martin.petersen, hare, Arnd Bergmann, Tony Battersby,
	Christoph Hellwig

On Fri, Aug 16, 2019 at 11:59:04AM -0400, Douglas Gilbert wrote:
> On 2019-08-15 1:30 p.m., Bart Van Assche wrote:
> > On 8/13/19 9:19 PM, Douglas Gilbert wrote:
> > > Bart Van Assche hinted at a better API design but didn't present
> > > it. If he did, that would be the first time an alternate API
> > > design was presented for async usage in the 20 years that I have
> > > been associated with the driver.
> > 
> > I would like to start from the use cases instead of the implementation
> > of a new SG/IO interface. My employer uses the SG/IO interface for
> > controlling SMR and
> 
> There is no "new" SG/IO interface. Linux has broken the ability of char
> drivers to safely use the read() and write() system calls. This
> adversely impacts the bsg and sg drivers. In response the following
> replacement mappings have been suggested in my first sg patchset:

I don't understand what you mean by "broken the ability of char drivers
to use the read() and write() system calls."

What changed, when did it change, what broke, and why can't we "fix" it
if it really is broken?

thanks,

greg k-h


* Re: [PATCH v3 00/20] sg: add v4 interface
From: Bart Van Assche @ 2019-08-16 18:10 UTC
  To: dgilbert, James Bottomley, linux-scsi, linux-block, linux-api
  Cc: martin.petersen, hare, Arnd Bergmann, Tony Battersby, Christoph Hellwig

On 8/16/19 8:59 AM, Douglas Gilbert wrote:
> On 2019-08-15 1:30 p.m., Bart Van Assche wrote:
>> HSMR disks. What we need is the ability to discover, read, write and 
>> configure such disks, support for the non-standard HSMR flex protocol, 
>> the ability to give certain users or groups access to a subset of the 
>> LBAs and also the ability to make that information persistent. I think 
>> that such functionality could be implemented by extending LVM and by 
>> adding support for all ZBC commands we need in the block layer, device 
>> mapper layer and also in the asynchronous I/O layer. The block, dm and 
>> aio layers already support submitting commands asynchronously but do 
>> not yet support all the ZBC commands that we use.
> 
> I believe that you will find that the more layers of abstraction that are
> placed between the actual device and the OS level API, the more difficult
> the discovery process will be. And in some cases you will need to get to
> a management layer to let those management functions "pass-through" those
> layers. Some RAID card drivers take advantage of the no_uld_attach flag in
> scsi_device to expose real devices, but only to the sg/bsg interface for
> management purposes (for utilities like smartmontools) and do not produce
> sd device nodes.

Isn't the very purpose of an operating system to provide device drivers 
and other abstraction layers such that not every application has to 
implement these?

My opinion is that using SG/IO to control SMR disks is suboptimal. A 
very powerful feature of the Linux block layer is the ability to stack 
block drivers. SG/IO is fundamentally incompatible with stacking block 
drivers. Stacking requires having access to the LBA, request size and 
other block layer request attributes. I don't think that we want to add 
code for parsing SCSI, NVMe pass-through commands etc. in block drivers
such as the device mapper.

Hence my proposal to improve support in the block layer for ZBC instead 
of using SG/IO to control SMR disks.

Bart.



* Re: [PATCH v3 00/20] sg: add v4 interface
From: Douglas Gilbert @ 2019-08-16 18:44 UTC
  To: Bart Van Assche, James Bottomley, linux-scsi, linux-block, linux-api
  Cc: martin.petersen, hare, Arnd Bergmann, Tony Battersby,
	Christoph Hellwig, Christian Franke

On 2019-08-16 2:10 p.m., Bart Van Assche wrote:
> On 8/16/19 8:59 AM, Douglas Gilbert wrote:
>> On 2019-08-15 1:30 p.m., Bart Van Assche wrote:
>>> HSMR disks. What we need is the ability to discover, read, write and 
>>> configure such disks, support for the non-standard HSMR flex protocol, the 
>>> ability to give certain users or groups access to a subset of the LBAs and 
>>> also the ability to make that information persistent. I think that such 
>>> functionality could be implemented by extending LVM and by adding support for 
>>> all ZBC commands we need in the block layer, device mapper layer and also in 
>>> the asynchronous I/O layer. The block, dm and aio layers already support 
>>> submitting commands asynchronously but do not yet support all the ZBC 
>>> commands that we use.
>>
>> I believe that you will find that the more layers of abstraction that are
>> placed between the actual device and the OS level API, the more difficult
>> the discovery process will be. And in some cases you will need to get to
>> a management layer to let those management functions "pass-through" those
>> layers. Some RAID card drivers take advantage of the no_uld_attach flag in
>> scsi_device to expose real devices, but only to the sg/bsg interface for
>> management purposes (for utilities like smartmontools) and do not produce
>> sd device nodes.
> 
> Isn't the very purpose of an operating system to provide device drivers and 
> other abstraction layers such that not every application has to implement these?
> 
> My opinion is that using SG/IO to control SMR disks is suboptimal. A very 
> powerful feature of the Linux block layer is the ability to stack block drivers. 
> SG/IO is fundamentally incompatible with stacking block drivers. Stacking 
> requires having access to the LBA, request size and other block layer request 
> attributes. I don't think that we want to add code for parsing SCSI, NVMe 
> pass-through commands etc. in block drivers such as the device mapper.
> 
> Hence my proposal to improve support in the block layer for ZBC instead of using 
> SG/IO to control SMR disks.

Please go right ahead. I just don't see why it should be at the expense
of this patchset or the sg pass-through in general.

Wearing another hat as the smartmontools SCSI maintainer, I know of no
supported OS or RAID product that has the infrastructure you talk about
for SMART information. The industry assumption seems to be that the
information will be pulled out at the real device level by a command set
pass-through. And RAID products present an issue if they don't support
a "no_uld_attach" type mechanism. The issue is that they then need to
offer a proprietary pass-through mechanism *** to bypass the virtual
device presented to the OS in order to get to the individual _real_
storage devices holding the SMART information. Sub-optimal maybe, but
still effective.

Doug Gilbert


*** and to their credit (IMO) several RAID "big boys" have sent the
     smartmontools project working code to navigate their proprietary
     pass-through mechanisms. Better that than us needing to beg for it.

