* Recommended HBA management interfaces
@ 2009-07-17 13:16 Mukker, Atul
  2009-07-17 15:35 ` Brian King
  0 siblings, 1 reply; 13+ messages in thread
From: Mukker, Atul @ 2009-07-17 13:16 UTC (permalink / raw)
  To: linux-scsi

Hi All,

We would like expert comments on the following questions regarding management of HBAs from applications.

Traditionally, our drivers create a character device node, whose file_operations are then used by the management applications to transfer HBA-specific commands. In addition to being quirky, this interface has a few limitations we would like to remove, the most important being the ability to seamlessly handle asynchronous events with data transfer.
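
(For illustration only: a minimal sketch of the kind of character-device management interface described above. The device name, ioctl number, and frame layout below are hypothetical, not taken from any actual LSI driver.)

    #include <linux/fs.h>
    #include <linux/ioctl.h>
    #include <linux/miscdevice.h>
    #include <linux/module.h>
    #include <linux/types.h>
    #include <linux/uaccess.h>

    /* Hypothetical management frame and ioctl number; real drivers define their own. */
    struct myhba_mgmt_frame {
            __u32 opcode;
            __u32 len;
            __u64 buf_ptr;          /* user-space buffer pointer passed by the app */
    };
    #define MYHBA_IOC_MGMT  _IOWR('M', 0x01, struct myhba_mgmt_frame)

    static long myhba_mgmt_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
    {
            struct myhba_mgmt_frame frame;

            if (cmd != MYHBA_IOC_MGMT)
                    return -ENOTTY;
            if (copy_from_user(&frame, (void __user *)arg, sizeof(frame)))
                    return -EFAULT;
            /* ... hand the frame to the HBA firmware and wait for completion ... */
            return 0;
    }

    static const struct file_operations myhba_mgmt_fops = {
            .owner          = THIS_MODULE,
            .unlocked_ioctl = myhba_mgmt_ioctl,
    };

    static struct miscdevice myhba_mgmt_dev = {
            .minor = MISC_DYNAMIC_MINOR,
            .name  = "myhba_mgmt",          /* appears as /dev/myhba_mgmt */
            .fops  = &myhba_mgmt_fops,
    };
    /* misc_register(&myhba_mgmt_dev) would be called from the driver's init path. */

(Note that passing a raw user-space buffer pointer inside such a frame is exactly the practice criticized later in this thread.)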

1. What other standard/recommended interfaces can applications use to transfer HBA-specific commands and data?

2. How should an LLD implement interfaces to transmit asynchronous information to the management applications? The requirement is to be able to transmit data buffers as well as event notifications.

3. The interface should be able to work even if no SCSI devices are exported to the kernel.

4. It should work seamlessly across VMware and Xen kernels.

Thanks
Atul Mukker
LSI Corp.


* Re: Recommended HBA management interfaces
  2009-07-17 13:16 Recommended HBA management interfaces Mukker, Atul
@ 2009-07-17 15:35 ` Brian King
  2009-07-20 16:28   ` Mukker, Atul
  0 siblings, 1 reply; 13+ messages in thread
From: Brian King @ 2009-07-17 15:35 UTC (permalink / raw)
  To: Mukker, Atul; +Cc: linux-scsi

Mukker, Atul wrote:
> Hi All,
> 
> We would like expert comments on the following questions regarding
> management of HBAs from applications.
> 
> Traditionally, our drivers create a character device node, whose
> file_operations are then used by the management applications to
> transfer HBA-specific commands. In addition to being quirky, this
> interface has a few limitations we would like to remove, the most
> important being the ability to seamlessly handle asynchronous events
> with data transfer.
> 
> 1. What other standard/recommended interfaces can applications use to
> transfer HBA-specific commands and data?

Depends on what the commands look like. With ipr, the commands that
the management application needs to send to the HBA look sufficiently
like SCSI that I was able to expose an sg device node for the adapter
and use SG_IO to send these commands.
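
(For illustration, a minimal userspace sketch of the SG_IO approach described above; the sg node path, the vendor CDB opcode, and the transfer size are placeholders, not ipr's actual values.)

    #include <fcntl.h>
    #include <scsi/sg.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>

    int main(void)
    {
            unsigned char cdb[16] = { 0xC0 };       /* hypothetical vendor-specific opcode */
            unsigned char resp[512];
            unsigned char sense[32];
            struct sg_io_hdr io;
            int fd = open("/dev/sg2", O_RDWR);      /* sg node exposed for the adapter */

            if (fd < 0)
                    return 1;

            memset(&io, 0, sizeof(io));
            io.interface_id    = 'S';
            io.cmd_len         = sizeof(cdb);
            io.cmdp            = cdb;
            io.dxfer_direction = SG_DXFER_FROM_DEV; /* read a management buffer back */
            io.dxferp          = resp;
            io.dxfer_len       = sizeof(resp);
            io.sbp             = sense;
            io.mx_sb_len       = sizeof(sense);
            io.timeout         = 30000;             /* milliseconds */

            if (ioctl(fd, SG_IO, &io) == 0 && io.status == 0)
                    printf("got %d bytes\n", (int)io.dxfer_len - io.resid);
            return 0;
    }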

sysfs, debugfs, and configfs are options as well.


> 2. How should an LLD implement interfaces to transmit asynchronous
> information to the management applications? The requirement is to be
> able to transmit data buffers as well as event notifications.

I've had good success with netlink. In my use, I only send a notification
to userspace and let the application send some commands to figure out
what happened, but netlink does allow sending data as well. It makes it very
easy to have multiple concurrent readers of the data, which I've found very
useful.
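
(For reference, a bare-bones userspace listener for such netlink notifications might look like the sketch below. The netlink protocol number and the multicast group are placeholders for whatever the driver, or the SCSI midlayer's netlink infrastructure, actually uses.)

    #include <linux/netlink.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define MYHBA_NETLINK_PROTO  NETLINK_USERSOCK  /* placeholder protocol number */
    #define MYHBA_NETLINK_GROUP  1                 /* placeholder multicast group */

    int main(void)
    {
            char buf[8192];
            struct sockaddr_nl addr;
            struct nlmsghdr *nlh;
            int len;
            int fd = socket(AF_NETLINK, SOCK_RAW, MYHBA_NETLINK_PROTO);

            if (fd < 0)
                    return 1;

            memset(&addr, 0, sizeof(addr));
            addr.nl_family = AF_NETLINK;
            addr.nl_groups = 1 << (MYHBA_NETLINK_GROUP - 1);    /* join the event group */
            if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
                    return 1;

            /* Receive one batch of event messages and walk the netlink headers. */
            len = recv(fd, buf, sizeof(buf), 0);
            for (nlh = (struct nlmsghdr *)buf; NLMSG_OK(nlh, len); nlh = NLMSG_NEXT(nlh, len))
                    printf("event: type=%u, %u bytes\n", nlh->nlmsg_type, nlh->nlmsg_len);

            close(fd);
            return 0;
    }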

> 3. The interface should be able to work even if no SCSI devices are
> exported to the kernel.

netlink allows this.

> 4. It should work seamlessly across VMware and Xen kernels.

netlink should work here too.

-Brian

-- 
Brian King
Linux on Power Virtualization
IBM Linux Technology Center




* RE: Recommended HBA management interfaces
  2009-07-17 15:35 ` Brian King
@ 2009-07-20 16:28   ` Mukker, Atul
  2009-07-20 16:57     ` James Smart
  0 siblings, 1 reply; 13+ messages in thread
From: Mukker, Atul @ 2009-07-20 16:28 UTC (permalink / raw)
  To: Brian King; +Cc: linux-scsi

Thanks, Brian. Netlink seems to be appropriate for our purpose as well, almost too good :-)

That makes me wonder: what's the catch? For one, the SCSI drivers do not make heavy use of this interface.

Are there other caveats associated with it?

Best regards,
Atul Mukker


* Re: Recommended HBA management interfaces
  2009-07-20 16:28   ` Mukker, Atul
@ 2009-07-20 16:57     ` James Smart
  2009-07-20 18:03       ` Mukker, Atul
  0 siblings, 1 reply; 13+ messages in thread
From: James Smart @ 2009-07-20 16:57 UTC (permalink / raw)
  To: Mukker, Atul; +Cc: Brian King, linux-scsi

FYI - netlink (and sysfs, and I believe debugfs) do not exist with VMware drivers. Additionally, with netlink, many of the distros no longer include libnl by default in their install images. Even interfaces that you think exist on VMware may have very different semantic behavior (almost all of the transport stuff either doesn't exist or is only partially implemented).

One big caveat I'd give you: it's not so much the interface being used, but rather what you are doing over the interface. One of the goals of the community is to present a consistent management paradigm for like things. Thus, if what you are doing is generic, you should do it in a generic manner so that all drivers for like hardware can utilize it. This was the motivation for the protocol transports. Interestingly, even the transports use different interfaces for different things. It all depends on what it is.

Lastly, some things are considered bad practice from a kernel safety point of view. Example: driver-specific ioctls passing around user-space buffer pointers. In these cases, it doesn't matter which interface you pick; they'll be rejected.

-- james s



* RE: Recommended HBA management interfaces
  2009-07-20 16:57     ` James Smart
@ 2009-07-20 18:03       ` Mukker, Atul
  2009-07-20 19:08         ` James Smart
  0 siblings, 1 reply; 13+ messages in thread
From: Mukker, Atul @ 2009-07-20 18:03 UTC (permalink / raw)
  To: James Smart; +Cc: Brian King, linux-scsi

Thanks for restating my original question.

1. What interface should be used by the HBA management applications to obtain (non-generic) information from the HBA?

2. How should driver notify such applications of asynchronous events happening on the HBA?

Please keep in mind that all data transfer between the applications and the HBA uses a private protocol.

Thanks
Atul Mukker
 


* Re: Recommended HBA management interfaces
  2009-07-20 18:03       ` Mukker, Atul
@ 2009-07-20 19:08         ` James Smart
  2009-07-20 20:33           ` Mukker, Atul
  0 siblings, 1 reply; 13+ messages in thread
From: James Smart @ 2009-07-20 19:08 UTC (permalink / raw)
  To: Mukker, Atul; +Cc: Brian King, linux-scsi

Mukker, Atul wrote:
> Thanks for restating my original question.
>
> 1. What interface should be used by the HBA management applications to obtain (non-generic) information from the HBA?
>   
My opinions:

sysfs:
  Pro: Good for singular data items and simple status (link state, f/w rev, etc).
       Very good for things that really don't need a tool, i.e. simplistic admin
       commands (show state, reset board, etc).
  Con: Doesn't work well for "transactions" that need multiple data elements.
       Lack of insight into process life cycle, thus multi-step and concurrent
       transactions are difficult.
       Doesn't work with binary data, buffers, etc.
       Difficult to use concurrently by multiple processes.
       Can't push async info to the user.
       No support for complex things.
       The list of attributes can get big. Not a big deal, but...
       Security is based on attribute permissions (not always the best model).

configfs:
  Pro: Basically sysfs, but for transactions with multiple data elements.
  Con: Same as sysfs, just minus the multiple-data-element con.

netlink:
  Pro: Very good for "multicast" operations - pushing async events to multiple
       receivers.
       Handles requests and responses with multiple data elements easily.
       Can track per-process life cycles.
       Socket based, so it could even support management from a different machine.
       Security checking is easy to build in.
  Con: Doesn't work well for large payloads.
       Payloads can't be referenced via a data pointer (they need to be inline
       in the packet).
       Direct DMA is not supported - data has to be staged to a driver buffer and
       copied in/out of the socket.
       Multi-step transactions are doable, but difficult. Maintaining relationships
       per pid is difficult.
       Multiple machines mean dealing with endianness and data typing.
       The netlink sockets do have memory-related issues that must be watched.

  Note: to avoid burning NETLINK id space, and perhaps colliding in different
  distro kernels, please use the mid-layer's netlink infrastructure, which does
  allow driver-specific messaging.

bsg:
  (Specifically the new midlayer sgio support that was recently added for ELS
  passthru; a rough userspace sketch follows after this list.)
  Pro: Supports requests and responses with multiple data elements easily.
       Supports separate request and response DMA-able payload buffers.
       Supports big payloads easily.
  Con: Lack of insight into process life cycle, thus multi-step and concurrent
       transactions are difficult.
       Async response generation (without an associated request) is very difficult.
       It's really a wrappered ioctl, with the midlayer protecting the kernel from
       bad ioctl practice via the way it converts the sgio ioctl into a midlayer
       request. This creates an odd programming interface, as you really want to
       wrapper the ioctl on the user side too.

Thus, when you look across the pros and cons, it's easy to see why the transport
is using different things for different purposes.
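
(To make the bsg option above concrete, here is a rough userspace sketch using the SG_IO v4 ioctl, as referenced from the list. The bsg node path and the vendor request bytes are placeholders, and the actual request encoding is transport/driver specific.)

    #include <fcntl.h>
    #include <linux/bsg.h>
    #include <scsi/sg.h>            /* for the SG_IO ioctl number */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>

    int main(void)
    {
            unsigned char request[16] = { 0xC0 };   /* hypothetical vendor request */
            unsigned char din[4096];                /* data-in payload buffer */
            unsigned char response[96];             /* transport/driver response buffer */
            struct sg_io_v4 io;
            int fd = open("/dev/bsg/host0", O_RDWR);    /* placeholder bsg node */

            if (fd < 0)
                    return 1;

            memset(&io, 0, sizeof(io));
            io.guard            = 'Q';
            io.protocol         = BSG_PROTOCOL_SCSI;
            io.subprotocol      = BSG_SUB_PROTOCOL_SCSI_TRANSPORT;
            io.request          = (uintptr_t)request;
            io.request_len      = sizeof(request);
            io.din_xferp        = (uintptr_t)din;
            io.din_xfer_len     = sizeof(din);
            io.response         = (uintptr_t)response;
            io.max_response_len = sizeof(response);
            io.timeout          = 30000;            /* milliseconds */

            if (ioctl(fd, SG_IO, &io) == 0)
                    printf("request completed, %u response bytes\n", io.response_len);
            return 0;
    }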

> 2. How should driver notify such applications of asynchronous events happening on the HBA?
>   
This is already there with the midlayer netlink support. Vendor-unique events are already supported.
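
(As an illustration of that midlayer support: the FC transport exposes a vendor-unique event hook, sketched below. The event payload layout and vendor id are placeholders, the prototype should be double-checked against include/scsi/scsi_transport_fc.h, and a non-FC driver would need the equivalent hook in its own transport.)

    #include <linux/types.h>
    #include <scsi/scsi_host.h>
    #include <scsi/scsi_transport_fc.h>

    /* Hypothetical event payload; the layout is entirely vendor-defined. */
    struct myhba_event {
            u32 code;
            u32 param;
    };

    /* Placeholder vendor id (normally derived from the vendor's OUI). */
    #define MYHBA_NL_VENDOR_ID 0x0000E0ULL

    static void myhba_notify_mgmt_app(struct Scsi_Host *shost, u32 code, u32 param)
    {
            struct myhba_event ev = { .code = code, .param = param };

            /* Push a vendor-unique event to userspace over the FC transport's
             * netlink channel, where management applications can listen for it. */
            fc_host_post_vendor_event(shost, fc_get_event_number(),
                                      sizeof(ev), (char *)&ev, MYHBA_NL_VENDOR_ID);
    }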

> Please keep in mind that all data transfer between the applications and the HBA uses a private protocol.
>   
Private or not, the code that uses the interface will have to be in the driver. The code will be inspected for proper/safe usage of the interfaces. Coding such that things in the messaging are black boxes will always be a point of contention.

-- james s



* RE: Recommended HBA management interfaces
  2009-07-20 19:08         ` James Smart
@ 2009-07-20 20:33           ` Mukker, Atul
  2009-07-21 12:29             ` James Smart
  0 siblings, 1 reply; 13+ messages in thread
From: Mukker, Atul @ 2009-07-20 20:33 UTC (permalink / raw)
  To: James Smart; +Cc: Brian King, linux-scsi

The management protocol involves a significant amount of binary data transfer involving multiple applications, so sysfs and friends are not useful for this particular application. But I gather from your (and Brian's) emails that the mid-layer sgio (bsg) extension should be used for this purpose.

As for asynchronous notifications, netlink seems to be the de-facto choice (or its mid-layer extensions). But didn't you mention earlier that VMware would not support this?

Thanks
Atul



* Re: Recommended HBA management interfaces
  2009-07-20 20:33           ` Mukker, Atul
@ 2009-07-21 12:29             ` James Smart
  2009-07-21 13:38               ` Mukker, Atul
  2009-07-21 13:48               ` Drew
  0 siblings, 2 replies; 13+ messages in thread
From: James Smart @ 2009-07-21 12:29 UTC (permalink / raw)
  To: Mukker, Atul; +Cc: Brian King, linux-scsi



Mukker, Atul wrote:
> The management protocol involves a significant amount of binary data transfer involving multiple applications, so sysfs and friends are not useful for this particular application. But I gather from your (and Brian's) emails that the mid-layer sgio (bsg) extension should be used for this purpose.
> 
> As for asynchronous notifications, netlink seems to be the de-facto choice (or its mid-layer extensions). But didn't you mention earlier that VMware would not support this?

I answered from a purely Linux perspective. VMware is not Linux. VMware attempts to emulate the Linux driver/midlayer APIs, but the emulation is done in their own way, with their own semantics, for their own purposes. Anyone who believes they can just drop a Linux driver into VMware and it works without change has a screw loose. It may appear to work, and some subsystems may be better than others, but there are very subtle and critical differences. As for all the ancillary interfaces such as sysfs, sgio, and the transports: a) they have a hard time keeping up with the pace of the Linux kernel; b) many of the interfaces run counter to VMware's hypervisor management model. Dependence on user-space utils, sysfs, etc. doesn't work in a COS-less environment. Netlink and sockets open up security holes, and hypervisor-level socket support brings all kinds of headaches and memory issues. Netlink is not supported. Sysfs isn't supported. Even portions of the transports/midlayer are only partially implemented. Unfortunately, VMware interfaces need to be taken up with VMware.

-- james s


* RE: Recommended HBA management interfaces
  2009-07-21 12:29             ` James Smart
@ 2009-07-21 13:38               ` Mukker, Atul
  2009-07-21 13:48               ` Drew
  1 sibling, 0 replies; 13+ messages in thread
From: Mukker, Atul @ 2009-07-21 13:38 UTC (permalink / raw)
  To: James Smart; +Cc: Brian King, linux-scsi

Thanks again for your input. Without going into philosophical details of what and how, we just wanted to figure out what is (and is not) technically possible across the various platforms, which you have answered very well.

Best regards,
Atul Mukker



* Re: Recommended HBA management interfaces
  2009-07-21 12:29             ` James Smart
  2009-07-21 13:38               ` Mukker, Atul
@ 2009-07-21 13:48               ` Drew
  2009-07-21 13:58                 ` Mukker, Atul
  1 sibling, 1 reply; 13+ messages in thread
From: Drew @ 2009-07-21 13:48 UTC (permalink / raw)
  To: linux-scsi

> VMware attempts to emulate the Linux driver/midlayer APIs, but the emulation
> is done in their own way, with their own semantics, for their own purposes.
> Anyone who believes they can just drop a Linux driver into VMware and it
> works without change has a screw loose.

I must confess to some confusion, James. I thought VMware emulates
hardware and that, outside of optimized drivers, it doesn't get involved
in the higher layers of the guest OS.


-- 
Drew

"Nothing in life is to be feared. It is only to be understood."
--Marie Curie


* RE: Recommended HBA management interfaces
  2009-07-21 13:48               ` Drew
@ 2009-07-21 13:58                 ` Mukker, Atul
  2009-07-21 14:59                   ` James Smart
  0 siblings, 1 reply; 13+ messages in thread
From: Mukker, Atul @ 2009-07-21 13:58 UTC (permalink / raw)
  To: Drew, linux-scsi

We _are_ talking about the "optimized" driver :-)


* Re: Recommended HBA management interfaces
  2009-07-21 13:58                 ` Mukker, Atul
@ 2009-07-21 14:59                   ` James Smart
  2009-07-21 16:27                     ` Drew
  0 siblings, 1 reply; 13+ messages in thread
From: James Smart @ 2009-07-21 14:59 UTC (permalink / raw)
  To: Mukker, Atul; +Cc: Drew, linux-scsi

There are some interesting dichotomies in this question/response.

Drivers exist:

1) Within the ESX kernel, for physical hardware, with ESX leveraging the APIs
from the Linux kernel/midlayer/etc. in order to pick up Linux drivers for
device support in their kernel/hypervisor. This was my discussion area.
Management of this area wants to happen via VMware-based paradigms.

2) Within a guest OS, for: a) emulated hardware, or b) direct PCI function
passthru to the guest OS.

3) VMware emulates an LSI adapter as a guest-OS-to-hypervisor abstraction point
(e.g., 2a). Thus LSI drivers could/can exist in both ESX and the guest OS
simultaneously, and if the guest is running Linux, both driver instances are
effectively the Linux driver. Atul/LSI is trying very hard to make the one
Linux driver work in both environments.

Hope this helps.

-- james s



* Re: Recommended HBA management interfaces
  2009-07-21 14:59                   ` James Smart
@ 2009-07-21 16:27                     ` Drew
  0 siblings, 0 replies; 13+ messages in thread
From: Drew @ 2009-07-21 16:27 UTC (permalink / raw)
  To: James Smart; +Cc: Mukker, Atul, linux-scsi

> 1) Within the ESX kernel for physical hardware, with ESX leveraging the
> api's  from the linux kernel/midlayer/etc in order to pick up linux drivers
> for device support in their kernel/hypervisor.  This was my discussion area.
> Management of this area wants to happen via VMware-based paradigms.

Hi James,

Thanks for the clarification. I'm mainly familiar with VMware's
'Server' offering, not ESX, hence the confusion.


-- 
Drew

"Nothing in life is to be feared. It is only to be understood."
--Marie Curie


Thread overview: 13 messages
2009-07-17 13:16 Recommended HBA management interfaces Mukker, Atul
2009-07-17 15:35 ` Brian King
2009-07-20 16:28   ` Mukker, Atul
2009-07-20 16:57     ` James Smart
2009-07-20 18:03       ` Mukker, Atul
2009-07-20 19:08         ` James Smart
2009-07-20 20:33           ` Mukker, Atul
2009-07-21 12:29             ` James Smart
2009-07-21 13:38               ` Mukker, Atul
2009-07-21 13:48               ` Drew
2009-07-21 13:58                 ` Mukker, Atul
2009-07-21 14:59                   ` James Smart
2009-07-21 16:27                     ` Drew
