* cellular modem driver APIs
@ 2019-04-03 21:15 Johannes Berg
  2019-04-04  8:51 ` Bjørn Mork
  0 siblings, 1 reply; 18+ messages in thread
From: Johannes Berg @ 2019-04-03 21:15 UTC (permalink / raw)
  To: netdev
  Cc: Dan Williams, Bjørn Mork, Subash Abhinov Kasiviswanathan,
	Sean Tranchetti

Hi all,

I've been looking at modem drivers, to see what the APIs are to interact
with them, and while I originally thought I had the story sorted out ...
not at all.

Here's the current things we seem to be doing:

  (1) Channels are created/encoded as VLANs (cdc_mbim)

      This is ... strange at best, it requires creating fake ethernet
      headers on the frames, just to be able to have a VLAN tag. If you
      could rely on VLAN acceleration it wouldn't be _so_ bad, but of
      course you can't, so you have to detect an in-band VLAN tag and
      decode/remove it, before taking the VLAN ID into the virtual
      channel number.

      Creating channels is hooked on VLAN operations, which is about the
      only thing that makes sense here?

  (2) Channels are created using sysfs (qmi_wwan)

      This feels almost worse - channels are created using sysfs and
      just *bam* new netdev shows up, no networking APIs are used to
      create them at all, and I suppose you can't even query the channel
      ID for each netdev if you rename them or so. Actually, maybe you
      can in sysfs, not sure I understand the code fully.

  (3) Channels are created using a new link type (rmnet)

      To me this sort of feels the most natural, but this particular
      implementation has at least two issues:

      (a) The implementation is basically driver-specific now, the link
          type is called 'rmnet' etc.
      (b) The bridge enslave thing there is awful.
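
Concretely, the in-band tag handling described under (1) boils down to
something like the following userspace sketch (illustrative only, not
the actual cdc_mbim code; a VLAN ID of 0 is treated as untagged here
for simplicity):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define TPID_8021Q 0x8100

/* Return the VLAN ID, i.e. the virtual channel / MBIM session ID in
 * scheme (1), if the frame carries an in-line 802.1Q tag after the
 * two MAC addresses, or 0 for an untagged frame (the default channel). */
uint16_t frame_channel_id(const uint8_t *frame, size_t len)
{
	if (len < 16)	/* 6 + 6 bytes dst/src MAC + 4-byte 802.1Q tag */
		return 0;
	if (((frame[12] << 8) | frame[13]) != TPID_8021Q)
		return 0;
	/* The VLAN ID is the low 12 bits of the TCI field. */
	return (uint16_t)(((frame[14] << 8) | frame[15]) & 0x0fff);
}

/* Strip the tag in place, as the driver must do before passing the
 * frame on; returns the new frame length. */
size_t strip_vlan_tag(uint8_t *frame, size_t len)
{
	if (frame_channel_id(frame, len) == 0)
		return len;
	memmove(frame + 12, frame + 16, len - 16);
	return len - 4;
}
```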


It seems to me that there really is space here for some common
framework, probably modelled on rmnet - that seems the most reasonable
approach of all three.

The only question I have there is whether the 'netdev model' they all
have actually makes sense. What I mean by that is that they all assume
they have a default channel (using untagged frames, initial netdev,
initial netdev respectively for (1) - (3)).

In 802.11, we don't have such a default channel - you can add/remove
virtual netdevs on the fly. But if you want to do that, then you can't
use IFLA_LINK and the normal link type, which means custom netlink and
custom userspace etc. which, while we do it in wifi, is bothersome.

Here I guess the question would be whether it makes sense to even remove
the default channel, or retag it, or something like that. If no, then to
me it all makes sense to just model rmnet. And even if it *is* something
that could theoretically be done, it seems well possible to me that the
benefits (using rtnl_link_register() etc.) outweigh the deficits of the
approach.


I'm tempted to take a stab at breaking out rmnet_link_ops from the rmnet
driver, somehow giving it an alias of 'wwan-channel' or something like
that, and putting it into some sort of small infrastructure.

Anyone else have any thoughts?

Thanks,
johannes


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: cellular modem driver APIs
  2019-04-03 21:15 cellular modem driver APIs Johannes Berg
@ 2019-04-04  8:51 ` Bjørn Mork
  2019-04-04  9:00   ` Johannes Berg
  0 siblings, 1 reply; 18+ messages in thread
From: Bjørn Mork @ 2019-04-04  8:51 UTC (permalink / raw)
  To: Johannes Berg
  Cc: netdev, Dan Williams, Subash Abhinov Kasiviswanathan,
	Sean Tranchetti, Daniele Palmas, Aleksander Morgado

Johannes Berg <johannes@sipsolutions.net> writes:

> Hi all,
>
> I've been looking at modem drivers, to see what the APIs are to interact
> with them, and while I originally thought I had the story sorted out ...
> not at all.

Thanks a lot for doing this!  Being responsible for most of the issues
you point out, I can only say that you have my full support if you want
to change any of it.

My pathetic excuses below are just meant to clarify why things are the
way they are.  They are not a defense for status quo ;-)

> Here's the current things we seem to be doing:
>
>   (1) Channels are created/encoded as VLANs (cdc_mbim)
>
>       This is ... strange at best, it requires creating fake ethernet
>       headers on the frames, just to be able to have a VLAN tag. If you
>       could rely on VLAN acceleration it wouldn't be _so_ bad, but of
>       course you can't, so you have to detect an in-band VLAN tag and
>       decode/remove it, before taking the VLAN ID into the virtual
>       channel number.

No, the driver sets NETIF_F_HW_VLAN_CTAG_TX. There is no in-band VLAN
tag for any normal use.  The tag is taken directly from skb metadata and
mapped to the appropriate MBIM session ID.

But this failed when cooking raw frames with an in-line tag using packet
sockets, so I added a fallback to in-line tags for that use case.
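
As a userspace model of that TX-side decision (plain C, not driver
code; the struct fields merely stand in for the kernel's
skb_vlan_tag_present()/skb_vlan_tag_get() helpers):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Model of cdc_mbim's TX-side choice: prefer the out-of-band tag that
 * VLAN acceleration leaves in skb metadata, and only fall back to
 * parsing an in-line tag for the raw packet-socket case. */
struct model_frame {
	bool hw_tag_valid;	/* models skb_vlan_tag_present() */
	uint16_t hw_tag;	/* models skb_vlan_tag_get() */
	const uint8_t *data;	/* raw frame, Ethernet header first */
	size_t len;
};

uint16_t mbim_session_id(const struct model_frame *fr)
{
	if (fr->hw_tag_valid)
		return fr->hw_tag & 0x0fff;	/* accelerated path */
	if (fr->len >= 16 && fr->data[12] == 0x81 && fr->data[13] == 0x00)
		return (uint16_t)(((fr->data[14] << 8) | fr->data[15]) & 0x0fff);
	return 0;	/* untagged: default session */
}
```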


>       Creating channels is hooked on VLAN operations, which is about the
>       only thing that makes sense here?

Well, that was why I did this, to avoid requiring some new set of
userspace tools to manage these links.  I looked for some existing tools
for adding virtual netdevs, and I thought I could make VLANs fit the
scheme. 

In hindsight, I should have created a new netlink based API for cellular
modem virtual links instead.  But I don't think it ever struck me as a
choice I had at the time.  I just wasn't experienced enough to realize
how the Linux kernel APIs are developed ;-)


>   (2) Channels are created using sysfs (qmi_wwan)
>
>       This feels almost worse - channels are created using sysfs and
>       just *bam* new netdev shows up, no networking APIs are used to
>       create them at all, and I suppose you can't even query the channel
>       ID for each netdev if you rename them or so. Actually, maybe you
>       can in sysfs, not sure I understand the code fully.

This time I was, and I tried to learn from the MBIM mistake. So I asked
the users (ModemManager developers++), proposing a netlink API as a
possible solution:

https://lists.freedesktop.org/archives/libqmi-devel/2017-January/001900.html

The options I presented were those I saw at the time: VLANs like
cdc_mbim, a new netlink API, or sysfs.  There wasn't much feedback, but
sysfs "won".  So this was a decision made by the users of the API, FWIW.


>   (3) Channels are created using a new link type (rmnet)
>
>       To me this sort of feels the most natural, but this particular
>       implementation has at least two issues:
>
>       (a) The implementation is basically driver-specific now, the link
>           type is called 'rmnet' etc.
>       (b) The bridge enslave thing there is awful.


This driver showed up right after the sysfs based implementation in
qmi_wwan.  Too bad we didn't know about this work then.  I  don't think
anyone would have been interested in the qmi_wwan sysfs thing if we had
known about the plans for this driver.  But what's done is done.


> It seems to me that there really is space here for some common
> framework, probably modelled on rmnet - that seems the most reasonable
> approach of all three.
>
> The only question I have there is whether the 'netdev model' they all
> have actually makes sense. What I mean by that is that they all assume
> they have a default channel (using untagged frames, initial netdev,
> initial netdev respectively for (1) - (3)).

Good question.  I guess the main argument for the 'netdev model' is that
it makes the device directly usable with no extra setup or tools. Most
users won't ever want or need more than one  channel anyway.  They use
the modem for a single IP session.

There is also an advantage for QMI/RMNET where you can drop the muxing
header when using the default channel only.

> In 802.11, we don't have such a default channel - you can add/remove
> virtual netdevs on the fly. But if you want to do that, then you can't
> use IFLA_LINK and the normal link type, which means custom netlink and
> custom userspace etc. which, while we do it in wifi, is bothersome.

Yes, some of the feedback I've got from the embedded users is that they
don't want any more custom userspace tools. But I'm sure you've heard
that a few times too :-)

> Here I guess the question would be whether it makes sense to even remove
> the default channel, or retag it, or something like that. If no, then to
> me it all makes sense to just model rmnet. And even if it *is* something
> that could theoretically be done, it seems well possible to me that the
> benefits (using rtnl_link_register() etc.) outweigh the deficits of the
> approach.
>
>
> I'm tempted to take a stab at breaking out rmnet_link_ops from the rmnet
> driver, somehow giving it an alias of 'wwan-channel' or something like
> that, and putting it into some sort of small infrastructure.
>
> Anyone else have any thoughts?

I've added Aleksander (ModemManager) and Daniele (qmi_wwan muxing user
and developer) to the CC list.  They are the ones who would end up using
a possible new API, so they should definitely be part of the discussion.



Bjørn


* Re: cellular modem driver APIs
  2019-04-04  8:51 ` Bjørn Mork
@ 2019-04-04  9:00   ` Johannes Berg
  2019-04-04 15:52     ` Dan Williams
  2019-04-06 17:20     ` Daniele Palmas
  0 siblings, 2 replies; 18+ messages in thread
From: Johannes Berg @ 2019-04-04  9:00 UTC (permalink / raw)
  To: Bjørn Mork
  Cc: netdev, Dan Williams, Subash Abhinov Kasiviswanathan,
	Sean Tranchetti, Daniele Palmas, Aleksander Morgado

Hi,

> Thanks a lot for doing this!  Being responsible for most of the issues
> you point out, I can only say that you have my full support if you want
> to change any of it.

:-)

> My pathetic excuses below are just meant to clarify why things are the
> way they are.  They are not a defense for status quo ;-)

Thanks!

> > Here's the current things we seem to be doing:
> > 
> >   (1) Channels are created/encoded as VLANs (cdc_mbim)
> > 
> >       This is ... strange at best, it requires creating fake ethernet
> >       headers on the frames, just to be able to have a VLAN tag. If you
> >       could rely on VLAN acceleration it wouldn't be _so_ bad, but of
> >       course you can't, so you have to detect an in-band VLAN tag and
> >       decode/remove it, before taking the VLAN ID into the virtual
> >       channel number.
> 
> No, the driver sets NETIF_F_HW_VLAN_CTAG_TX. There is no in-band VLAN
> tag for any normal use.  The tag is taken directly from skb metadata and
> mapped to the appropriate MBIM session ID.

Right, I saw this.

> But this failed when cooking raw frames with an in-line tag using packet
> sockets, so I added a fallback to in-line tags for that use case.

But this still means that the fallback for in-line has to be supported,
so you can't really fully rely on VLAN acceleration. Maybe my wording
here was incomplete, but I was aware of this.

Nevertheless, it means to replicate this in another driver you don't
just need the VLAN acceleration handling, but also the fallback, so it's
a bunch of extra code.

> >       Creating channels is hooked on VLAN operations, which is about the
> >       only thing that makes sense here?
> 
> Well, that was why I did this, to avoid requiring some new set of
> userspace tools to manage these links.  I looked for some existing tools
> for adding virtual netdevs, and I thought I could make VLANs fit the
> scheme. 

Right.

> In hindsight, I should have created a new netlink based API for cellular
> modem virtual links instead.  But I don't think it ever struck me as a
> choice I had at the time.  I just wasn't experienced enough to realize
> how the Linux kernel APIs are developed ;-)

:-)
And likely really it wasn't all as fleshed out as today with the
plethora of virtual links supported. This seems fairly old.

> 
> >   (2) Channels are created using sysfs (qmi_wwan)
> > 
> >       This feels almost worse - channels are created using sysfs and
> >       just *bam* new netdev shows up, no networking APIs are used to
> >       create them at all, and I suppose you can't even query the channel
> >       ID for each netdev if you rename them or so. Actually, maybe you
> >       can in sysfs, not sure I understand the code fully.
> 
> This time I was, and I tried to learn from the MBIM mistake. So I asked
> the users (ModemManager developers++), proposing a netlink API as a
> possible solution:
> 
> https://lists.freedesktop.org/archives/libqmi-devel/2017-January/001900.html
> 
> The options I presented were those I saw at the time: VLANs like
> cdc_mbim, a new netlink API, or sysfs.  There wasn't much feedback, but
> sysfs "won".  So this was a decision made by the users of the API, FWIW.

Fair point. Dan pointed out that no (default) userspace actually exists
to do this though, and users kind of have to do it manually - he says
modem manager and libmbim all just use the default channel today. So not
sure they really went on to become users of this ;-)

> >   (3) Channels are created using a new link type (rmnet)
> > 
> >       To me this sort of feels the most natural, but this particular
> >       implementation has at least two issues:
> > 
> >       (a) The implementation is basically driver-specific now, the link
> >           type is called 'rmnet' etc.
> >       (b) The bridge enslave thing there is awful.
> 
> 
> This driver showed up right after the sysfs based implementation in
> qmi_wwan.  Too bad we didn't know about this work then.  I  don't think
> anyone would have been interested in the qmi_wwan sysfs thing if we had
> known about the plans for this driver.  But what's done is done.

Sure.

> > It seems to me that there really is space here for some common
> > framework, probably modelled on rmnet - that seems the most reasonable
> > approach of all three.
> > 
> > The only question I have there is whether the 'netdev model' they all
> > have actually makes sense. What I mean by that is that they all assume
> > they have a default channel (using untagged frames, initial netdev,
> > initial netdev respectively for (1) - (3)).
> 
> Good question.  I guess the main argument for the 'netdev model' is that
> it makes the device directly usable with no extra setup or tools. Most
> users won't ever want or need more than one  channel anyway.  They use
> the modem for a single IP session.

You can do that with both models, really. I mean, with wifi we just
create a single virtual interface by default and you can then go and use
it. But you can also *delete* it later because the underlying
abstraction ("wiphy") doesn't disappear.

This can't be done if you handle the new channel netdevs on top of the
default channel netdev.

> There is also an advantage for QMI/RMNET where you can drop the muxing
> header when using the default channel only.

That's a pretty internal driver thing though:

 if (tag == default) {
   /* send the frame down without a header */
   return;
 }

no?
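
Expanded into a runnable userspace sketch (the 4-byte header below is
only loosely modelled on rmnet's QMAP header and is an assumption for
illustration, not the real wire format):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define DEFAULT_CHANNEL 0

/* Model of the "no header on the default channel" special case.
 * Hypothetical header layout: pad/cmd byte, mux_id, 16-bit payload
 * length. Returns bytes written to 'out', or 0 if it doesn't fit. */
size_t mux_encap(uint8_t channel, const uint8_t *payload, size_t len,
		 uint8_t *out, size_t out_size)
{
	if (channel == DEFAULT_CHANNEL) {
		/* send the frame down without a header */
		if (len > out_size)
			return 0;
		memcpy(out, payload, len);
		return len;
	}
	if (len + 4 > out_size)
		return 0;
	out[0] = 0;			/* no padding, data (not command) */
	out[1] = channel;		/* mux_id selects the channel */
	out[2] = (uint8_t)(len >> 8);	/* payload length, big-endian */
	out[3] = (uint8_t)(len & 0xff);
	memcpy(out + 4, payload, len);
	return len + 4;
}
```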

> > In 802.11, we don't have such a default channel - you can add/remove
> > virtual netdevs on the fly. But if you want to do that, then you can't
> > use IFLA_LINK and the normal link type, which means custom netlink and
> > custom userspace etc. which, while we do it in wifi, is bothersome.
> 
> Yes, some of the feedback I've got from the embedded users is that they
> don't want any more custom userspace tools. But I'm sure you've heard
> that a few times too :-)

Not really, they have to run wpa_supplicant anyway and that handles it
all, but I hear you.

> > Here I guess the question would be whether it makes sense to even remove
> > the default channel, or retag it, or something like that. If no, then to
> > me it all makes sense to just model rmnet. And even if it *is* something
> > that could theoretically be done, it seems well possible to me that the
> > benefits (using rtnl_link_register() etc.) outweigh the deficits of the
> > approach.
> > 
> > 
> > I'm tempted to take a stab at breaking out rmnet_link_ops from the rmnet
> > driver, somehow giving it an alias of 'wwan-channel' or something like
> > that, and putting it into some sort of small infrastructure.
> > 
> > Anyone else have any thoughts?
> 
> I've added Aleksander (ModemManager) and Daniele (qmi_wwan muxing user
> and developer) to the CC list.  They are the ones who would end up using
> a possible new API, so they should definitely be part of the discussion.

Agree, thanks!

johannes



* Re: cellular modem driver APIs
  2019-04-04  9:00   ` Johannes Berg
@ 2019-04-04 15:52     ` Dan Williams
  2019-04-04 19:16       ` Subash Abhinov Kasiviswanathan
  2019-04-06 17:20     ` Daniele Palmas
  1 sibling, 1 reply; 18+ messages in thread
From: Dan Williams @ 2019-04-04 15:52 UTC (permalink / raw)
  To: Johannes Berg, Bjørn Mork
  Cc: netdev, Subash Abhinov Kasiviswanathan, Sean Tranchetti,
	Daniele Palmas, Aleksander Morgado

On Thu, 2019-04-04 at 11:00 +0200, Johannes Berg wrote:
> Hi,
> 
> > Thanks a lot for doing this!  Being responsible for most of the
> > issues
> > you point out, I can only say that you have my full support if you
> > want
> > to change any of it.
> 
> :-)
> 
> > My pathetic excuses below are just meant to clarify why things are
> > the
> > way they are.  They are not a defense for status quo ;-)
> 
> Thanks!
> 
> > > Here's the current things we seem to be doing:
> > > 
> > >   (1) Channels are created/encoded as VLANs (cdc_mbim)
> > > 
> > >       This is ... strange at best, it requires creating fake
> > > ethernet
> > >       headers on the frames, just to be able to have a VLAN tag.
> > > If you
> > >       could rely on VLAN acceleration it wouldn't be _so_ bad,
> > > but of
> > >       course you can't, so you have to detect an in-band VLAN tag
> > > and
> > >       decode/remove it, before taking the VLAN ID into the
> > > virtual
> > >       channel number.
> > 
> > No, the driver sets NETIF_F_HW_VLAN_CTAG_TX. There is no in-band
> > VLAN
> > tag for any normal use.  The tag is taken directly from skb
> > metadata and
> > mapped to the appropriate MBIM session ID.
> 
> Right, I saw this.
> 
> > But this failed when cooking raw frames with an in-line tag using
> > packet
> > sockets, so I added a fallback to in-line tags for that use case.
> 
> But this still means that the fallback for in-line has to be
> supported,
> so you can't really fully rely on VLAN acceleration. Maybe my wording
> here was incomplete, but I was aware of this.
> 
> Nevertheless, it means to replicate this in another driver you don't
> just need the VLAN acceleration handling, but also the fallback, so
> it's
> a bunch of extra code.
> 
> > >       Creating channels is hooked on VLAN operations, which is
> > > about the
> > >       only thing that makes sense here?
> > 
> > Well, that was why I did this, to avoid requiring some new set of
> > userspace tools to manage these links.  I looked for some existing
> > tools
> > for adding virtual netdevs, and I thought I could make VLANs fit
> > the
> > scheme. 
> 
> Right.
> 
> > In hindsight, I should have created a new netlink based API for
> > cellular
> > modem virtual links instead.  But I don't think it ever struck me
> > as a
> > choice I had at the time.  I just wasn't experienced enough to
> > realize
> > how the Linux kernel APIs are developed ;-)
> 
> :-)
> And likely really it wasn't all as fleshed out as today with the
> plethora of virtual links supported. This seems fairly old.
> 
> > >   (2) Channels are created using sysfs (qmi_wwan)
> > > 
> > >       This feels almost worse - channels are created using sysfs
> > > and
> > >       just *bam* new netdev shows up, no networking APIs are used
> > > to
> > >       create them at all, and I suppose you can't even query the
> > > channel
> > >       ID for each netdev if you rename them or so. Actually,
> > > maybe you
> > >       can in sysfs, not sure I understand the code fully.
> > 
> > This time I was, and I tried to learn from the MBIM mistake. So I
> > asked
> > the users (ModemManager developers++), proposing a netlink API as a
> > possible solution:
> > 
> > https://lists.freedesktop.org/archives/libqmi-devel/2017-January/001900.html
> > 
> > The options I presented were those I saw at the time: VLANs like
> > cdc_mbim, a new netlink API, or sysfs.  There wasn't much feedback,
> > but
> > sysfs "won".  So this was a decision made by the users of the API,
> > FWIW.
> 
> Fair point. Dan pointed out that no (default) userspace actually
> exists
> to do this though, and users kind of have to do it manually - he
> says
> modem manager and libmbim all just use the default channel today. So
> not
> sure they really went on to become users of this ;-)

To be clear, ModemManager doesn't (yet) make use of multiple IP
channels. But libmbim supports it with 'mbimcli --connect="session-
id=4,apn=XXXXX"' and then you'd add VLAN 4 onto the mbim netdev and
theoretically things would work :)  Bjorn would have the details
though.
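
For reference, the manual setup described above would look roughly
like this (device and interface names are assumptions that depend on
the system; session ID 4 matches the mbimcli example):

```shell
# Bring up MBIM session 4 (hypothetical control device node):
mbimcli -d /dev/cdc-wdm0 --connect="session-id=4,apn=XXXXX"

# Map MBIM session 4 to its own netdev by adding VLAN 4 on the
# mbim interface (assumed to be wwan0 here):
ip link add link wwan0 name wwan0.4 type vlan id 4
ip link set dev wwan0.4 up
```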

libmbim really doesn't care about the extra netdevs or channels itself
since it doesn't care about the data plane (nor does it need to at this
time).

Dan

> > >   (3) Channels are created using a new link type (rmnet)
> > > 
> > >       To me this sort of feels the most natural, but this
> > > particular
> > >       implementation has at least two issues:
> > > 
> > >       (a) The implementation is basically driver-specific now,
> > > the link
> > >           type is called 'rmnet' etc.
> > >       (b) The bridge enslave thing there is awful.
> > 
> > This driver showed up right after the sysfs based implementation in
> > qmi_wwan.  Too bad we didn't know about this work then.  I  don't
> > think
> > anyone would have been interested in the qmi_wwan sysfs thing if we
> > had
> > known about the plans for this driver.  But what's done is done.
> 
> Sure.
> 
> > > It seems to me that there really is space here for some common
> > > framework, probably modelled on rmnet - that seems the most
> > > reasonable
> > > approach of all three.
> > > 
> > > The only question I have there is whether the 'netdev model' they
> > > all
> > > have actually makes sense. What I mean by that is that they all
> > > assume
> > > they have a default channel (using untagged frames, initial
> > > netdev,
> > > initial netdev respectively for (1) - (3)).
> > 
> > Good question.  I guess the main argument for the 'netdev model' is
> > that
> > it makes the device directly usable with no extra setup or tools.
> > Most
> > users won't ever want or need more than one  channel anyway.  They
> > use
> > the modem for a single IP session.
> 
> You can do that with both models, really. I mean, with wifi we just
> create a single virtual interface by default and you can then go and
> use
> it. But you can also *delete* it later because the underlying
> abstraction ("wiphy") doesn't disappear.
> 
> This can't be done if you handle the new channel netdevs on top of
> the
> default channel netdev.
> 
> > There is also an advantage for QMI/RMNET where you can drop the
> > muxing
> > header when using the default channel only.
> 
> That's a pretty internal driver thing though:
> 
>  if (tag == default) {
>    /* send the frame down without a header */
>    return;
>  }
> 
> no?
> 
> > > In 802.11, we don't have such a default channel - you can
> > > add/remove
> > > virtual netdevs on the fly. But if you want to do that, then you
> > > can't
> > > use IFLA_LINK and the normal link type, which means custom
> > > netlink and
> > > custom userspace etc. which, while we do it in wifi, is
> > > bothersome.
> > 
> > Yes, some of the feedback I've got from the embedded users is that
> > they
> > don't want any more custom userspace tools. But I'm sure you've
> > heard
> > that a few times too :-)
> 
> Not really, they have to run wpa_supplicant anyway and that handles
> it
> all, but I hear you.
> 
> > > Here I guess the question would be whether it makes sense to even
> > > remove
> > > the default channel, or retag it, or something like that. If no,
> > > then to
> > > me it all makes sense to just model rmnet. And even if it *is*
> > > something
> > > that could theoretically be done, it seems well possible to me
> > > that the
> > > benefits (using rtnl_link_register() etc.) outweigh the deficits
> > > of the
> > > approach.
> > > 
> > > 
> > > I'm tempted to take a stab at breaking out rmnet_link_ops from
> > > the rmnet
> > > driver, somehow giving it an alias of 'wwan-channel' or something
> > > like
> > > that, and putting it into some sort of small infrastructure.
> > > 
> > > Anyone else have any thoughts?
> > 
> > I've added Aleksander (ModemManager) and Daniele (qmi_wwan muxing
> > user
> > and developer) to the CC list.  They are the ones who would end up
> > using
> > a possible new API, so they should definitely be part of the
> > discussion.
> 
> Agree, thanks!
> 
> johannes
> 



* Re: cellular modem driver APIs
  2019-04-04 15:52     ` Dan Williams
@ 2019-04-04 19:16       ` Subash Abhinov Kasiviswanathan
  2019-04-04 20:38         ` Johannes Berg
  0 siblings, 1 reply; 18+ messages in thread
From: Subash Abhinov Kasiviswanathan @ 2019-04-04 19:16 UTC (permalink / raw)
  To: Dan Williams, Johannes Berg, Bjørn Mork
  Cc: netdev, Sean Tranchetti, Daniele Palmas, Aleksander Morgado

On 2019-04-04 09:52, Dan Williams wrote:
> On Thu, 2019-04-04 at 11:00 +0200, Johannes Berg wrote:
>> Hi,
>> 
>> > Thanks a lot for doing this!  Being responsible for most of the
>> > issues
>> > you point out, I can only say that you have my full support if you
>> > want
>> > to change any of it.
>> 
>> :-)
>> 
>> > My pathetic excuses below are just meant to clarify why things are
>> > the
>> > way they are.  They are not a defense for status quo ;-)
>> 
>> Thanks!
>> 
>> > > Here's the current things we seem to be doing:
>> > >
>> > >   (1) Channels are created/encoded as VLANs (cdc_mbim)
>> > >
>> > >       This is ... strange at best, it requires creating fake
>> > > ethernet
>> > >       headers on the frames, just to be able to have a VLAN tag.
>> > > If you
>> > >       could rely on VLAN acceleration it wouldn't be _so_ bad,
>> > > but of
>> > >       course you can't, so you have to detect an in-band VLAN tag
>> > > and
>> > >       decode/remove it, before taking the VLAN ID into the
>> > > virtual
>> > >       channel number.
>> >
>> > No, the driver sets NETIF_F_HW_VLAN_CTAG_TX. There is no in-band
>> > VLAN
>> > tag for any normal use.  The tag is taken directly from skb
>> > metadata and
>> > mapped to the appropriate MBIM session ID.
>> 
>> Right, I saw this.
>> 
>> > But this failed when cooking raw frames with an in-line tag using
>> > packet
>> > sockets, so I added a fallback to in-line tags for that use case.
>> 
>> But this still means that the fallback for in-line has to be
>> supported,
>> so you can't really fully rely on VLAN acceleration. Maybe my wording
>> here was incomplete, but I was aware of this.
>> 
>> Nevertheless, it means to replicate this in another driver you don't
>> just need the VLAN acceleration handling, but also the fallback, so
>> it's
>> a bunch of extra code.
>> 
>> > >       Creating channels is hooked on VLAN operations, which is
>> > > about the
>> > >       only thing that makes sense here?
>> >
>> > Well, that was why I did this, to avoid requiring some new set of
>> > userspace tools to manage these links.  I looked for some existing
>> > tools
>> > for adding virtual netdevs, and I thought I could make VLANs fit
>> > the
>> > scheme.
>> 
>> Right.
>> 
>> > In hindsight, I should have created a new netlink based API for
>> > cellular
>> > modem virtual links instead.  But I don't think it ever struck me
>> > as a
>> > choice I had at the time.  I just wasn't experienced enough to
>> > realize
>> > how the Linux kernel APIs are developed ;-)
>> 
>> :-)
>> And likely really it wasn't all as fleshed out as today with the
>> plethora of virtual links supported. This seems fairly old.
>> 
>> > >   (2) Channels are created using sysfs (qmi_wwan)
>> > >
>> > >       This feels almost worse - channels are created using sysfs
>> > > and
>> > >       just *bam* new netdev shows up, no networking APIs are used
>> > > to
>> > >       create them at all, and I suppose you can't even query the
>> > > channel
>> > >       ID for each netdev if you rename them or so. Actually,
>> > > maybe you
>> > >       can in sysfs, not sure I understand the code fully.
>> >
>> > This time I was, and I tried to learn from the MBIM mistake. So I
>> > asked
>> > the users (ModemManager developers++), proposing a netlink API as a
>> > possible solution:
>> >
>> > https://lists.freedesktop.org/archives/libqmi-devel/2017-January/001900.html
>> >
>> > The options I presented were those I saw at the time: VLANs like
>> > cdc_mbim, a new netlink API, or sysfs.  There wasn't much feedback,
>> > but
>> > sysfs "won".  So this was a decision made by the users of the API,
>> > FWIW.
>> 
>> Fair point. Dan pointed out that no (default) userspace actually
>> exists
>> to do this though, and users kind of have to do it manually - he
>> says
>> modem manager and libmbim all just use the default channel today. So
>> not
>> sure they really went on to become users of this ;-)
> 
> To be clear, ModemManager doesn't (yet) make use of multiple IP
> channels. But libmbim supports it with 'mbimcli --connect="session-
> id=4,apn=XXXXX"' and then you'd add VLAN 4 onto the mbim netdev and
> theoretically things would work :)  Bjorn would have the details
> though.
> 
> libmbim really doesn't care about the extra netdevs or channels itself
> since it doesn't care about the data plane (nor does it need to at this
> time).
> 
> Dan
> 
>> > >   (3) Channels are created using a new link type (rmnet)
>> > >
>> > >       To me this sort of feels the most natural, but this
>> > > particular
>> > >       implementation has at least two issues:
>> > >
>> > >       (a) The implementation is basically driver-specific now,
>> > > the link
>> > >           type is called 'rmnet' etc.
>> > >       (b) The bridge enslave thing there is awful.
>> >

Hi

The normal mode of operation of rmnet is using the rmnet netdevices
in an embedded device.

The bridge mode is used only for testing by sending frames
without de-muxing to some other driver such as a USB netdev so packets
can be parsed on a tethered PC.

>> > This driver showed up right after the sysfs based implementation in
>> > qmi_wwan.  Too bad we didn't know about this work then.  I  don't
>> > think
>> > anyone would have been interested in the qmi_wwan sysfs thing if we
>> > had
>> > known about the plans for this driver.  But what's done is done.
>> 
>> Sure.
>> 

I was planning to refactor qmi_wwan to reuse rmnet as much as possible.
Unfortunately, I wasn't able to get qmi_wwan / modem manager
configured and never got to this.

>> > > It seems to me that there really is space here for some common
>> > > framework, probably modelled on rmnet - that seems the most
>> > > reasonable
>> > > approach of all three.
>> > >
>> > > The only question I have there is whether the 'netdev model' they
>> > > all
>> > > have actually makes sense. What I mean by that is that they all
>> > > assume
>> > > they have a default channel (using untagged frames, initial
>> > > netdev,
>> > > initial netdev respectively for (1) - (3)).
>> >
>> > Good question.  I guess the main argument for the 'netdev model' is
>> > that
>> > it makes the device directly usable with no extra setup or tools.
>> > Most
>> > users won't ever want or need more than one  channel anyway.  They
>> > use
>> > the modem for a single IP session.
>> 
>> You can do that with both models, really. I mean, with wifi we just
>> create a single virtual interface by default and you can then go and
>> use
>> it. But you can also *delete* it later because the underlying
>> abstraction ("wiphy") doesn't disappear.
>> 
>> This can't be done if you handle the new channel netdevs on top of
>> the
>> default channel netdev.
>> 
>> > There is a also an advantage for QMI/RMNET where you can drop the
>> > muxing
>> > header when using the default channel only.
>> 
>> That's a pretty internal driver thing though:
>> 
>>  if (tag == default) {
>>    /* send the frame down without a header */
>>    return;
>>  }
>> 
>> no?
>> 

Having a single channel is not common so rmnet doesn't support a default
channel mode. Most of the use cases we have require multiple PDNs, so
this translates to multiple rmnet devices.
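The mux-to-netdev mapping being described here boils down to a small demux step on receive; a toy sketch in plain C, where the header layout (mux ID in the second byte) and all names are illustrative only, not the actual rmnet code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy demux step: the first header byte carries flags/padding, the
 * second the mux ID selecting the per-PDN channel. The 2-byte-prefix
 * layout and the names are illustrative assumptions. */
#define MAX_CHANNELS 16

struct channel {
	int present;              /* channel (rmnet netdev) configured? */
	unsigned long rx_packets; /* per-channel receive counter */
};

static struct channel channels[MAX_CHANNELS];

/* Returns the channel the frame was delivered to, or NULL if the
 * mux ID maps to no configured channel (the frame would be dropped). */
static struct channel *demux_rx(const uint8_t *frame, size_t len)
{
	if (len < 4)
		return NULL;
	uint8_t mux_id = frame[1];
	if (mux_id >= MAX_CHANNELS || !channels[mux_id].present)
		return NULL;
	channels[mux_id].rx_packets++;
	return &channels[mux_id];
}
```

Each PDN then corresponds to one entry in the table, which is what "multiple PDNs translates to multiple rmnet devices" means in practice.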

>> > > In 802.11, we don't have such a default channel - you can
>> > > add/remove
>> > > virtual netdevs on the fly. But if you want to do that, then you
>> > > can't
>> > > use IFLA_LINK and the normal link type, which means custom
>> > > netlink and
>> > > custom userspace etc. which, while we do it in wifi, is
>> > > bothersome.
>> >
>> > Yes, some of the feedback I've got from the embedded users is that
>> > they
>> > don't want any more custom userspace tools. But I'm sure you've
>> > heard
>> > that a few times too :-)
>> 
>> Not really, they have to run wpa_supplicant anyway and that handles
>> it
>> all, but I hear you.
>> 
>> > > Here I guess the question would be whether it makes sense to even
>> > > remove
>> > > the default channel, or retag it, or something like that. If no,
>> > > then to
>> > > me it all makes sense to just model rmnet. And even if it *is*
>> > > something
>> > > that could theoretically be done, it seems well possible to me
>> > > that the
>> > > benefits (using rtnl_link_register() etc.) outweigh the deficits
>> > > of the
>> > > approach.
>> > >
>> > >
>> > > I'm tempted to take a stab at breaking out rmnet_link_ops from
>> > > the rmnet
>> > > driver, somehow giving it an alias of 'wwan-channel' or something
>> > > like
>> > > that, and putting it into some sort of small infrastructure.
>> > >
>> > > Anyone else have any thoughts?
>> >
>> > I've added Aleksander (ModemManager) and Daniele (qmi_wwan muxing
>> > user
>> > and developer) to the CC list.  They are the ones who would end up
>> > using
>> > a possible new API, so they should definitely be part of the
>> > discussion
>> 
>> Agree, thanks!
>> 
>> johannes
>> 

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: cellular modem driver APIs
  2019-04-04 19:16       ` Subash Abhinov Kasiviswanathan
@ 2019-04-04 20:38         ` Johannes Berg
  2019-04-04 21:00           ` Johannes Berg
  2019-04-05  4:45           ` Subash Abhinov Kasiviswanathan
  0 siblings, 2 replies; 18+ messages in thread
From: Johannes Berg @ 2019-04-04 20:38 UTC (permalink / raw)
  To: Subash Abhinov Kasiviswanathan, Dan Williams, Bjørn Mork
  Cc: netdev, Sean Tranchetti, Daniele Palmas, Aleksander Morgado

Hi,

> The normal mode of operation of rmnet is using the rmnet netdevices
> in an embedded device.

Sure. Can you say what driver this would typically live on top of? I'm
actually a bit surprised to find out this isn't really a driver :-)

In my view right now, I'd recommend splitting rmnet into two pieces:

 1) The netlink API, be it called "rmnet" or something else, probably
    I'd call it something else like "wwan-mux" or so and leave "rmnet"
    as an alias.
    This part is certainly going to be generic. I'm not sure where
    exactly to draw the line here, but I'd rather start out drawing it
    more over to the API side (i.e. keep it just the API) instead of
    including some of the RX handlers etc.

 2) The actual layering protocol implementation, with things like
    rmnet_map_ingress_handler() and rmnet_map_add_map_header(),
    basically all the stuff handling MAP headers.
    This is clearly device specific, but I wonder what device.

> The bridge mode is used only for testing by sending frames
> without de-muxing to some other driver such as a USB netdev so packets
> can be parsed on a tethered PC.

Yeah, I get it, it's just done in a strange way. You'd think adding a
tcpdump or some small application that just resends the packets directly
received from the underlying "real_dev" using a ptype_all socket would
be sufficient? Though perhaps not quite the same performance, but then
you could easily not use an application but a dev_add_pack() thing? Or
probably even tc's mirred?

> I was planning to refactor qmi_wwan to reuse rmnet as much as possible.
> Unfortunately, I wasn't able to get qmi_wwan / modem manager
> configured and never got to this.

OK.

> Having a single channel is not common

Depends what kind of system you're looking at, but I hear you :)

> so rmnet doesn't support a default
> channel mode. Most of the uses cases we have require multiple PDNs so
> this translates to multiple rmnet devices.

Right, sure. I just wonder then what happens with the underlying netdev.
Is that just dead?

In fact, is the underlying netdev even usable without having rmnet on
top? If not, why is it even a netdev?

Like I said, I'm thinking that the whole wiphy/vif abstraction makes a
bit more sense here too, like in wifi, although it requires all new
userspace or at least not using IFLA_LINK but something else ...

Then again, I suppose we could have an IFLA_WWAN_DEVICE and have that be
something else, just like I suppose we could create a wifi virtual
device by having IFLA_WIPHY and using rtnl_link_ops, just didn't do it.

Would or wouldn't that make more sense?

We can still create a default channel netdev on top ...



My gut feeling is that it would in fact make more sense.



So, straw man proposal:

 * create net/wwan/ with some API like
    - register_wwan_device()/unregister_wwan_device()
    - struct wwan_device_ops {
          int (*wdo_add_channel)(struct wwan_device *dev,
                                 struct wwan_channel_cfg *cfg);
          int (*wdo_rm_channel)(...);
          /* ... */
      }
 * implement there a rtnl_link_ops that can create new links but needs
   the wwan device (IFLA_WWAN) instead of underlying netdev (IFLA_LINK)
 * strip the rtnl_link_ops from rmnet and use that infra instead
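For concreteness, the ops table in the straw man could look roughly like the mock below; a userspace sketch in which every name (struct wwan_device, wdo_add_channel, register_wwan_device, ...) is the hypothetical one from the proposal, not an existing kernel API:

```c
#include <assert.h>
#include <stddef.h>

/* Userspace mock of the proposed net/wwan/ ops table. Only the shape
 * of the abstraction is shown; the real thing would live in the kernel
 * and carry far more state. */
struct wwan_device;

struct wwan_channel_cfg {
	int mux_id; /* channel identifier, analogous to rmnet's mux_id */
};

struct wwan_device_ops {
	int (*wdo_add_channel)(struct wwan_device *dev,
			       const struct wwan_channel_cfg *cfg);
	int (*wdo_rm_channel)(struct wwan_device *dev, int mux_id);
};

struct wwan_device {
	const struct wwan_device_ops *ops;
	int nchannels;
};

/* A trivial driver backend: just count configured channels. */
static int demo_add(struct wwan_device *dev,
		    const struct wwan_channel_cfg *cfg)
{
	(void)cfg;
	dev->nchannels++;
	return 0;
}

static int demo_rm(struct wwan_device *dev, int mux_id)
{
	(void)mux_id;
	if (dev->nchannels == 0)
		return -1;
	dev->nchannels--;
	return 0;
}

static const struct wwan_device_ops demo_ops = {
	.wdo_add_channel = demo_add,
	.wdo_rm_channel = demo_rm,
};

/* What a register_wwan_device() call might hand to the framework. */
static struct wwan_device *register_wwan_device_demo(void)
{
	static struct wwan_device dev = { .ops = &demo_ops };
	return &dev;
}
```

The rtnl_link_ops layer would then translate an IFLA_WWAN-based newlink request into one wdo_add_channel() call, keeping the driver entirely out of the netlink plumbing.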

Now, OTOH, this loses a bunch of benefits. We may want to be able to use
ethtool to flash a modem, start tcpdump on the underlying netdev
directly to see everything, etc.?

So the alternative would be to still have the underlying wwan device
represent itself as a netdev. But is that the right thing to do when
that never really transports data frames in the real use cases?

For wifi we actually had this ('master' netdev), but we decided to get
rid of it, the abstraction just didn't make sense, and the frames moving
through those queues etc. also didn't really make sense. But the
downside is building more APIs here, and for some parts also custom
userspace.

Then again, we could also have a 'monitor wwan' device type, right? You
say you don't add a specific channel, but a sniffer, and then you can
use tcpdump etc. on the sniffer netdev. Same in wireless, really.


So anyway. Can you say what rmnet is used on top of, and what use the
underlying netdev is when you say it's not really of much use since you
need multiple channels?

johannes



* Re: cellular modem driver APIs
  2019-04-04 20:38         ` Johannes Berg
@ 2019-04-04 21:00           ` Johannes Berg
  2019-04-05  4:45           ` Subash Abhinov Kasiviswanathan
  1 sibling, 0 replies; 18+ messages in thread
From: Johannes Berg @ 2019-04-04 21:00 UTC (permalink / raw)
  To: Subash Abhinov Kasiviswanathan, Dan Williams, Bjørn Mork
  Cc: netdev, Sean Tranchetti, Daniele Palmas, Aleksander Morgado

On Thu, 2019-04-04 at 22:38 +0200, Johannes Berg wrote:

> > The bridge mode is used only for testing by sending frames
> > without de-muxing to some other driver such as a USB netdev so packets
> > can be parsed on a tethered PC.
> 
> Yeah, I get it, it's just done in a strange way. You'd think adding a
> tcpdump or some small application that just resends the packets directly
> received from the underlying "real_dev" using a ptype_all socket would
> be sufficient? Though perhaps not quite the same performance, but then
> you could easily not use an application but a dev_add_pack() thing? Or
> probably even tc's mirred?

And to extend that thought, tc's ife action would let you encapsulate
the things you have in ethernet headers... I think.

johannes



* Re: cellular modem driver APIs
  2019-04-04 20:38         ` Johannes Berg
  2019-04-04 21:00           ` Johannes Berg
@ 2019-04-05  4:45           ` Subash Abhinov Kasiviswanathan
  2019-04-06 17:22             ` Daniele Palmas
  2019-04-08 19:49             ` Johannes Berg
  1 sibling, 2 replies; 18+ messages in thread
From: Subash Abhinov Kasiviswanathan @ 2019-04-05  4:45 UTC (permalink / raw)
  To: Johannes Berg
  Cc: Dan Williams, Bjørn Mork, netdev, Sean Tranchetti,
	Daniele Palmas, Aleksander Morgado

On 2019-04-04 14:38, Johannes Berg wrote:
> Hi,
> 
>> The normal mode of operation of rmnet is using the rmnet netdevices
>> in an embedded device.
> 
> Sure. Can you say what driver this would typically live on top of? I'm
> actually a bit surprised to find out this isn't really a driver :-)
> 

This needs a physical net device such as IP accelerator
https://lkml.org/lkml/2018/11/7/233 or Modem host interface
https://lkml.org/lkml/2018/4/26/1159

I recall Daniele also managed to get rmnet working with qmi_wwan
(with an additional patch in which I had made qmi_wwan a passthrough)

> In my view right now, I'd recommend splitting rmnet into two pieces:
> 
>  1) The netlink API, be it called "rmnet" or something else, probably
>     I'd call it something else like "wwan-mux" or so and leave "rmnet"
>     as an alias.
>     This part is certainly going to be generic. I'm not sure where
>     exactly to draw the line here, but I'd rather start out drawing it
>     more over to the API side (i.e. keep it just the API) instead of
>     including some of the RX handlers etc.
> 
>  2) The actual layering protocol implementation, with things like
>     rmnet_map_ingress_handler() and rmnet_map_add_map_header(),
>     basically all the stuff handling MAP headers.
>     This is clearly device specific, but I wonder what device.
> 
>> The bridge mode is used only for testing by sending frames
>> without de-muxing to some other driver such as a USB netdev so packets
>> can be parsed on a tethered PC.
> 
> Yeah, I get it, it's just done in a strange way. You'd think adding a
> tcpdump or some small application that just resends the packets 
> directly
> received from the underlying "real_dev" using a ptype_all socket would
> be sufficient? Though perhaps not quite the same performance, but then
> you could easily not use an application but a dev_add_pack() thing? Or
> probably even tc's mirred?
> 
> And to extend that thought, tc's ife action would let you encapsulate
> the things you have in ethernet headers... I think.

We need raw IP frames from an embedded device transmitted to a PC
and vice versa.

>> I was planning to refactor qmi_wwan to reuse rmnet as much as 
>> possible.
>> Unfortunately, I wasn't able to get qmi_wwan / modem manager
>> configured and never got to this.
> 
> OK.
> 
>> Having a single channel is not common
> 
> Depends what kind of system you're looking at, but I hear you :)
> 
>> so rmnet doesn't support a default
>> channel mode. Most of the uses cases we have require multiple PDNs so
>> this translates to multiple rmnet devices.
> 
> Right, sure. I just wonder then what happens with the underlying 
> netdev.
> Is that just dead?
> 
> In fact, is the underlying netdev even usable without having rmnet on
> top? If not, why is it even a netdev?
> 

Yes, the underlying netdev itself cannot do much on its own, as the
network stack won't be able to decipher the muxed frames.

The operation of rmnet was to be agnostic of the underlying driver.
The netdev model was chosen for it since it was easy to have an
rx_handler attach to the netdevice exposed by any of those drivers.

> Like I said, I'm thinking that the whole wiphy/vif abstraction makes a
> bit more sense here too, like in wifi, although it requires all new
> userspace or at least not using IFLA_LINK but something else ...
> 
> Then again, I suppose we could have an IFLA_WWAN_DEVICE and have that 
> be
> something else, just like I suppose we could create a wifi virtual
> device by having IFLA_WIPHY and using rtnl_link_ops, just didn't do it.
> 
> Would or wouldn't that make more sense?
> 
> We can still create a default channel netdev on top ...
> 
> 
> 
> My gut feeling is that it would in fact make more sense.
> 
> 
> 
> So, straw man proposal:
> 
>  * create net/wwan/ with some API like
>     - register_wwan_device()/unregister_wwan_device()
>     - struct wwan_device_ops {
>           int (*wdo_add_channel)(struct wwan_device *dev,
>                                  struct wwan_channel_cfg *cfg);
>           int (*wdo_rm_channel)(...);
>           /* ... */
>       }
>  * implement there a rtnl_link_ops that can create new links but needs
>    the wwan device (IFLA_WWAN) instead of underlying netdev (IFLA_LINK)
>  * strip the rtnl_link_ops from rmnet and use that infra instead
> 
> Now, OTOH, this loses a bunch of benefits. We may want to be able to 
> use
> ethtool to flash a modem, start tcpdump on the underlying netdev
> directly to see everything, etc.?
> 

Yes, we use that underlying netdev to view the muxed raw IP frames in 
tcpdump.

> So the alternative would be to still have the underlying wwan device
> represent itself as a netdev. But is that the right thing to do when
> that never really transports data frames in the real use cases?
> 
> For wifi we actually had this ('master' netdev), but we decided to get
> rid of it, the abstraction just didn't make sense, and the frames 
> moving
> through those queues etc. also didn't really make sense. But the
> downside is building more APIs here, and for some parts also custom
> userspace.
> 
> Then again, we could also have a 'monitor wwan' device type, right? You
> say you don't add a specific channel, but a sniffer, and then you can
> use tcpdump etc. on the sniffer netdev. Same in wireless, really.
> 

Can you tell how this sniffer netdev works? If rmnet needed to create a
separate netdev for monitoring the muxed raw IP frames, wouldn't that
just serve the same purpose as the underlying netdev?

> 
> So anyway. Can you say what rmnet is used on top of, and what use the
> underlying netdev is when you say it's not really of much use since you
> need multiple channels?
> 
> johannes

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


* Re: cellular modem driver APIs
  2019-04-04  9:00   ` Johannes Berg
  2019-04-04 15:52     ` Dan Williams
@ 2019-04-06 17:20     ` Daniele Palmas
  2019-04-08 19:51       ` Johannes Berg
  1 sibling, 1 reply; 18+ messages in thread
From: Daniele Palmas @ 2019-04-06 17:20 UTC (permalink / raw)
  To: Johannes Berg
  Cc: Bjørn Mork, netdev, Dan Williams,
	Subash Abhinov Kasiviswanathan, Sean Tranchetti,
	Aleksander Morgado

Hi,

Il giorno gio 4 apr 2019 alle ore 11:00 Johannes Berg
<johannes@sipsolutions.net> ha scritto:

> > >   (2) Channels are created using sysfs (qmi_wwan)
> > >
> > >       This feels almost worse - channels are created using sysfs and
> > >       just *bam* new netdev shows up, no networking APIs are used to
> > >       create them at all, and I suppose you can't even query the channel
> > >       ID for each netdev if you rename them or so. Actually, maybe you
> > >       can in sysfs, not sure I understand the code fully.
> >
> > This time I was, and I tried to learn from the MBIM mistake. So I asked
> > the users (ModemManager developers++), proposing a netlink API as a
> > possible solution:
> >
> > https://lists.freedesktop.org/archives/libqmi-devel/2017-January/001900.html
> >
> > The options I presented were those I saw at the time: VLANs like
> > cdc_mbim, a new netlink API, or sysfs.  There wasn't much feedback, but
> > sysfs "won".  So this was a decision made by the users of the API, FWIW.
>
> Fair point. Dan pointed out that no (default) userspace actually exists
> to do this though, and users kinda of have to do it manually - he says
> modem manager and libmbim all just use the default channel today. So not
> sure they really went on to become users of this ;-)
>

the qmi_wwan sysfs qmap feature, being very easy to use, is serving
well for me and customers of the company I work for (mainly directly
with libqmi, not ModemManager), but I understand the need to have a
common framework and will gladly test and provide feedback for any new
development related to this.

Regards,
Daniele

>
> johannes
>


* Re: cellular modem driver APIs
  2019-04-05  4:45           ` Subash Abhinov Kasiviswanathan
@ 2019-04-06 17:22             ` Daniele Palmas
  2019-04-08 19:49             ` Johannes Berg
  1 sibling, 0 replies; 18+ messages in thread
From: Daniele Palmas @ 2019-04-06 17:22 UTC (permalink / raw)
  To: Subash Abhinov Kasiviswanathan
  Cc: Johannes Berg, Dan Williams, Bjørn Mork, netdev,
	Sean Tranchetti, Aleksander Morgado

Hi,

Il giorno ven 5 apr 2019 alle ore 06:45 Subash Abhinov Kasiviswanathan
<subashab@codeaurora.org> ha scritto:
>
> On 2019-04-04 14:38, Johannes Berg wrote:
> > Hi,
> >
> >> The normal mode of operation of rmnet is using the rmnet netdevices
> >> in an embedded device.
> >
> > Sure. Can you say what driver this would typically live on top of? I'm
> > actually a bit surprised to find out this isn't really a driver :-)
> >
>
> This needs a physical net device such as IP accelerator
> https://lkml.org/lkml/2018/11/7/233 or Modem host interface
> https://lkml.org/lkml/2018/4/26/1159
>
> I recall Daniele also managed to get rmnet working with qmi_wwan
> (with an additional patch in which I had made qmi_wwan a passthrough)
>

confirmed, it was working fine.

Regards,
Daniele


* Re: cellular modem driver APIs
  2019-04-05  4:45           ` Subash Abhinov Kasiviswanathan
  2019-04-06 17:22             ` Daniele Palmas
@ 2019-04-08 19:49             ` Johannes Berg
  2019-04-11  3:54               ` Subash Abhinov Kasiviswanathan
  1 sibling, 1 reply; 18+ messages in thread
From: Johannes Berg @ 2019-04-08 19:49 UTC (permalink / raw)
  To: Subash Abhinov Kasiviswanathan
  Cc: Dan Williams, Bjørn Mork, netdev, Sean Tranchetti,
	Daniele Palmas, Aleksander Morgado

On Thu, 2019-04-04 at 22:45 -0600, Subash Abhinov Kasiviswanathan wrote:
> On 2019-04-04 14:38, Johannes Berg wrote:
> > Hi,
> > 
> > > The normal mode of operation of rmnet is using the rmnet netdevices
> > > in an embedded device.
> > 
> > Sure. Can you say what driver this would typically live on top of? I'm
> > actually a bit surprised to find out this isn't really a driver :-)
> > 
> 
> This needs a physical net device such as IP accelerator
> https://lkml.org/lkml/2018/11/7/233 or Modem host interface
> https://lkml.org/lkml/2018/4/26/1159

OK. But it means that you have a very specific encapsulation mode on top
of the "netdev". I'm still not convinced we should actually make that a
netdev, but I'll elaborate elsewhere.

> I recall Daniele also managed to get rmnet working with qmi_wwan
> (with an additional patch in which I had made qmi_wwan a passthrough)

I guess that uses the same encapsulation then, yes, I see it now:
qmi_wwan's struct qmimux_hdr and rmnet's struct rmnet_map_header are
very similar.

Btw, I see that struct rmnet_map_header uses a bitfield - this seems to
go down to the device so probably will not work right on both big and
little endian.
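The usual fix for an on-the-wire header is to drop the bitfield and pack with explicit shifts and masks, which produce the same bytes regardless of host endianness. A sketch; the layout shown (1-bit command flag, 1 reserved bit, 6-bit pad length, 8-bit mux ID, 16-bit big-endian packet length) only loosely mirrors the MAP header, so treat exact widths and bit positions as assumptions:

```c
#include <assert.h>
#include <stdint.h>

/* Pack a MAP-style 4-byte header without C bitfields, so the byte
 * layout is the same on big- and little-endian hosts. */
static void map_header_pack(uint8_t buf[4], unsigned cd, unsigned pad,
			    uint8_t mux_id, uint16_t pkt_len)
{
	buf[0] = (uint8_t)(((cd & 0x1) << 7) | (pad & 0x3f));
	buf[1] = mux_id;
	buf[2] = (uint8_t)(pkt_len >> 8);   /* network byte order, */
	buf[3] = (uint8_t)(pkt_len & 0xff); /* independent of host CPU */
}

static uint8_t map_header_mux_id(const uint8_t buf[4])
{
	return buf[1];
}

static uint16_t map_header_pkt_len(const uint8_t buf[4])
{
	return (uint16_t)((buf[2] << 8) | buf[3]);
}
```

The kernel alternative is __be16/__be32 fields with htons()/ntohs() conversions, but accessor functions like these avoid the compiler-defined bitfield ordering entirely.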

> > Yeah, I get it, it's just done in a strange way. You'd think adding a
> > tcpdump or some small application that just resends the packets 
> > directly
> > received from the underlying "real_dev" using a ptype_all socket would
> > be sufficient? Though perhaps not quite the same performance, but then
> > you could easily not use an application but a dev_add_pack() thing? Or
> > probably even tc's mirred?
> > 
> > And to extend that thought, tc's ife action would let you encapsulate
> > the things you have in ethernet headers... I think.
> 
> We need raw IP frames from a embedded device transmitted to a PC
> and vice versa.

Sure. But you just need to encap them in some kind of ethernet frame to
transport them on the wire, but don't really need to care much about
how.
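The "wrap it for the wire" step really can be as small as prepending a dummy ethernet header; a sketch, using the IEEE local-experimental ethertype 0x88B5 purely as a placeholder value:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define ETH_HDR_LEN 14
/* 0x88B5 is an IEEE 802 local-experimental ethertype; any value the
 * PC-side parser agrees on would do. */
#define ETH_P_MAP_DEMO 0x88B5

/* Prepend a throwaway ethernet header so a raw IP (or MAP-muxed)
 * frame can be carried on the wire; returns total frame length. */
static size_t eth_encap(uint8_t *out, const uint8_t *ip_frame, size_t len)
{
	memset(out, 0, 12);                      /* dummy dst/src MACs */
	out[12] = ETH_P_MAP_DEMO >> 8;           /* ethertype, */
	out[13] = ETH_P_MAP_DEMO & 0xff;         /* network byte order */
	memcpy(out + ETH_HDR_LEN, ip_frame, len);
	return ETH_HDR_LEN + len;
}
```

The receiving side just strips the 14 bytes again and parses the payload as raw IP, which is the point being made: the encapsulation only needs to exist, not to be meaningful.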

> Yes, the underlying netdev itself cannot do much on its own as network
> stack wont be able to decipher the muxed frames.

Right.

> The operation of rmnet was to be agnostic of the underlying driver.
> The netdev model was chosen for it since it was easy to have a 
> rx_handler attach to the netdevice exposed by any of those drivers.

I really do think it's the wrong model though:

   1. The network stack cannot do anything with the muxed data, and so
      there's no value from that perspective in exposing it that way.
   2. The rx_handler attach thing is a bit of a red herring - if you have
      some other abstraction that the driver registers with, you can just
      say "send packets here" and then demux things properly, without
      having a netdev. Actually, I'd almost argue that rmnet should've just
      been a sort of encap/decap library that the drivers like qmi_wwan,
      rmnet_ipa and mhi use to talk to the device.
   3. Having this underlying netdev is actually very limiting. We found
      this with wifi a long time ago, and I suspect with 5G coming up
      you'll be in a similar situation. You'll want to do things like
      multi-queue, different hardware queues for different channels, etc.
      and muxing it all over a single netdev (and a single queue there)
      becomes an easily avoidable point of contention.
   4. (I thought about something else but it escapes me now)

> > Now, OTOH, this loses a bunch of benefits. We may want to be able to 
> > use
> > ethtool to flash a modem, start tcpdump on the underlying netdev
> > directly to see everything, etc.?
> > 
> 
> Yes, we use that underlying netdev to view the muxed raw IP frames in 
> tcpdump.

That's the easiest of all - just make the framework able to add a 
'sniffer' netdev that can see all the TX/RX for the other channels.

Which I already said, and you asked:

> Can you tell how this sniffer netdev works? 

Well, it's just an abstraction - a netdev that reports as received all
frames that the stack or underlying driver sees.

> If rmnet needed to create a
> separate netdev for monitoring the muxed raw IP frames, wouldnt that
> just serve the same purpose as that of underlying netdev.

No, it's different in that it's only there for debugging, only reports
all the frames the stack/driver saw, and doesn't interfere with your
actual operation like the underlying netdev does like I described above.

johannes



* Re: cellular modem driver APIs
  2019-04-06 17:20     ` Daniele Palmas
@ 2019-04-08 19:51       ` Johannes Berg
  0 siblings, 0 replies; 18+ messages in thread
From: Johannes Berg @ 2019-04-08 19:51 UTC (permalink / raw)
  To: Daniele Palmas
  Cc: Bjørn Mork, netdev, Dan Williams,
	Subash Abhinov Kasiviswanathan, Sean Tranchetti,
	Aleksander Morgado

On Sat, 2019-04-06 at 19:20 +0200, Daniele Palmas wrote:
> 
> the qmi_wwan sysfs qmap feature, being very easy to use, is serving
> well for me and customers of the company I work for (mainly directly
> with libqmi, not ModemManager), 

Yeah, I don't doubt this. In fact, we could arguably provide the same
through some kind of common stack.

I sort of have to believe though that it's even more limiting that we
*don't* have any common interfaces, since that means you'll have to
rewrite your software even if you just want to switch to a different
Qualcomm modem (e.g. PCIe instead of USB).

> but I understand the need to have a
> common framework and will gladly test and provide feedback for any new
> development related to this.

Thanks :-)

johannes



* Re: cellular modem driver APIs
  2019-04-08 19:49             ` Johannes Berg
@ 2019-04-11  3:54               ` Subash Abhinov Kasiviswanathan
  2019-04-12 12:01                 ` Johannes Berg
  0 siblings, 1 reply; 18+ messages in thread
From: Subash Abhinov Kasiviswanathan @ 2019-04-11  3:54 UTC (permalink / raw)
  To: Johannes Berg
  Cc: Dan Williams, Bjørn Mork, netdev, Sean Tranchetti,
	Daniele Palmas, Aleksander Morgado, netdev-owner

> OK. But it means that you have a very specific encapsulation mode on 
> top
> of the "netdev". I'm still not convinced we should actually make that a
> netdev, but I'll elaborate elsewhere.
> 
>> I recall Daniele also managed to get rmnet working with qmi_wwan
>> (with an additional patch in which I had made qmi_wwan a passthrough)
> 
> I guess that uses the same encapsulation then, yes, I see it now:
> qmi_wwan's struct qmimux_hdr and rmnet's struct rmnet_map_header are
> very similar.
> 
> Btw, I see that struct rmnet_map_header uses a bitfield - this seems to
> go down to the device so probably will not work right on both big and
> little endian.
> 

Yes, I have tested so far in big endian only. I need to add support for
little endian.

>> We need raw IP frames from a embedded device transmitted to a PC
>> and vice versa.
> 
> Sure. But you just need to encap them in some kind of ethernet frame to
> transport them on the wire, but don't really need to care much about
> how.
> 

These packets will be processed as raw IP muxed frames on the PC as
well, not as ethernet though.

>> Yes, the underlying netdev itself cannot do much on its own as network
>> stack wont be able to decipher the muxed frames.
> 
> Right.
> 
>> The operation of rmnet was to be agnostic of the underlying driver.
>> The netdev model was chosen for it since it was easy to have a
>> rx_handler attach to the netdevice exposed by any of those drivers.
> 
> I really do think it's the wrong model though:
> 
>    1. The network stack cannot do anything with the muxed data, and so
>       there's no value from that perspective in exposing it that way.
>    2. The rx_handler attach thing is a bit of a red herring - if you 
> have
>       some other abstraction that the driver registers with, you can 
> just
>       say "send packets here" and then demux things properly, without
>       having a netdev. Actually, I'd almost argue that rmnet should've 
> just
>       been a sort of encap/decap library that the drivers like 
> qmi_wwan,
>       rmnet_ipa and mhi use to talk to the device.

Currently, iproute2 can be used to add the underlying dev as real_dev
and create rmnet links over it (ip link add link rmnet_ipa0 name rmnet0
type rmnet mux_id 1). Would this continue to work if:
1. the rmnet library were to be included directly as part of the
   underlying driver itself
2. there is no underlying dev at all

>    3. Having this underlying netdev is actually very limiting. We found
>       this with wifi a long time ago, and I suspect with 5G coming up
>       you'll be in a similar situation. You'll want to do things like
>       multi-queue, different hardware queues for different channels, 
> etc.
>       and muxing it all over a single netdev (and a single queue there)
>       becomes an easily avoidable point of contention.
>    4. (I thought about something else but it escapes me now)
> 
>> > Now, OTOH, this loses a bunch of benefits. We may want to be able to
>> > use
>> > ethtool to flash a modem, start tcpdump on the underlying netdev
>> > directly to see everything, etc.?
>> >
>> 
>> Yes, we use that underlying netdev to view the muxed raw IP frames in
>> tcpdump.
> 
> That's the easiest of all - just make the framework able to add a
> 'sniffer' netdev that can see all the TX/RX for the other channels.
> 

One additional use of the underlying netdev is for RPS.
This helps to separate out the processing of the underlying netdev and
rmnet. If rmnet were to be converted into a library, we would still need
this functionality.

Having said this, looking forward to trying out your patches :)

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


* Re: cellular modem driver APIs
  2019-04-11  3:54               ` Subash Abhinov Kasiviswanathan
@ 2019-04-12 12:01                 ` Johannes Berg
  2019-04-12 14:27                   ` Bjørn Mork
  2019-04-14 19:09                   ` Subash Abhinov Kasiviswanathan
  0 siblings, 2 replies; 18+ messages in thread
From: Johannes Berg @ 2019-04-12 12:01 UTC (permalink / raw)
  To: Subash Abhinov Kasiviswanathan
  Cc: Dan Williams, Bjørn Mork, netdev, Sean Tranchetti,
	Daniele Palmas, Aleksander Morgado, netdev-owner

On Wed, 2019-04-10 at 21:54 -0600, Subash Abhinov Kasiviswanathan wrote:

> > > We need raw IP frames from a embedded device transmitted to a PC
> > > and vice versa.
> > 
> > Sure. But you just need to encap them in some kind of ethernet frame to
> > transport them on the wire, but don't really need to care much about
> > how.
> > 
> 
> These packets will be processed as raw IP muxed frames on the PC as 
> well, not as ethernet though.

But in order to transport them, they're always encapsulated in ethernet?

> Currently, iproute2 can be used to add the underlying dev as real_dev 
> and create rmnet links over it (ip link add link rmnet_ipa0 name rmnet0 type 
> rmnet mux_id 1). Would this continue to work if -
> 1. the rmnet library were to be included directly as part of the 
> underlying driver itself
> 2. there is no underlying dev at all

Yeah, this is the big question.

If there's no underlying netdev at all, then no, it wouldn't work.
Though, it could be "faked" in a sense, by doing two things:

a) having the driver/infra always create a default channel interface,
   say mux_id 0?
b) by treating

	ip link add link rmnet_ipa0 name rmnet0 type rmnet mux_id 1

   as not creating a new netdev *on top of* but rather *as sibling to*
   "rmnet0".


The alternative is to just keep rmnet and the underlying driver, but tie
them together a little more closely so that they can - together -
register with a hypothetical new WWAN framework.

See - even if we create such a framework in a way that it doesn't
require an underlying netdev (which I think is the better choice for
various reasons, particularly related to 5G), then there's nothing that
says that you *cannot* have it anyway and keep the exact same rmnet +
underlying driver model.

> One additional use of underlying netdev is for RPS.
> This helps to separate out the processing of the underlying netdev and
> rmnet. If rmnet were to be converted into a library, we would still need
> this functionality.

Hmm, not sure I understand this. If you do RPS/RSS then that's a
hardware function, and the netdev doesn't really come into play
immediately? If the underlying driver directly deals with multiple
netdevs that's actually an *advantage*, no?

johannes



* Re: cellular modem driver APIs
  2019-04-12 12:01                 ` Johannes Berg
@ 2019-04-12 14:27                   ` Bjørn Mork
  2019-04-12 17:04                     ` Johannes Berg
  2019-04-14 19:09                   ` Subash Abhinov Kasiviswanathan
  1 sibling, 1 reply; 18+ messages in thread
From: Bjørn Mork @ 2019-04-12 14:27 UTC (permalink / raw)
  To: Johannes Berg
  Cc: Subash Abhinov Kasiviswanathan, Dan Williams, netdev,
	Sean Tranchetti, Daniele Palmas, Aleksander Morgado,
	netdev-owner

Johannes Berg <johannes@sipsolutions.net> writes:
> On Wed, 2019-04-10 at 21:54 -0600, Subash Abhinov Kasiviswanathan wrote:
>
>> These packets will be processed as raw IP muxed frames on the PC as 
>> well, not as ethernet though.
>
> But in order to transport them, they're always encapsulated in ethernet?

No. There is no ethernet encapsulation of QMAP muxed frames over USB.

Same goes for MBIM, BTW. The cdc_mbim driver adds ethernet encapsulation
on the host side, but that is just an unfortunate design decision based
on the flawed assumption that ethernet interfaces are easier to relate
to.


Bjørn


* Re: cellular modem driver APIs
  2019-04-12 14:27                   ` Bjørn Mork
@ 2019-04-12 17:04                     ` Johannes Berg
  0 siblings, 0 replies; 18+ messages in thread
From: Johannes Berg @ 2019-04-12 17:04 UTC (permalink / raw)
  To: Bjørn Mork
  Cc: Subash Abhinov Kasiviswanathan, Dan Williams, netdev,
	Sean Tranchetti, Daniele Palmas, Aleksander Morgado,
	netdev-owner

On Fri, 2019-04-12 at 16:27 +0200, Bjørn Mork wrote:
> Johannes Berg <johannes@sipsolutions.net> writes:
> > On Wed, 2019-04-10 at 21:54 -0600, Subash Abhinov Kasiviswanathan wrote:
> > 
> > > These packets will be processed as raw IP muxed frames on the PC as 
> > > well, not as ethernet though.
> > 
> > But in order to transport them, they're always encapsulated in ethernet?
> 
> No. There is no ethernet encapsulation of QMAP muxed frames over USB.
> 
> Same goes for MBIM, BTW. The cdc_mbim driver adds ethernet encapsulation
> on the host side, but that is just an unfortunate design decision based
> on the flawed assumption that ethernet interfaces are easier to relate
> to.

Yes yes, sorry. I snipped too much - we were talking here in the context
of capturing the QMAP muxed frames remotely to another system, in
particular the strange bridge mode rmnet has.

And I said up the thread:

> Yeah, I get it, it's just done in a strange way. You'd think adding a
> tcpdump or some small application that just resends the packets directly
> received from the underlying "real_dev" using a ptype_all socket would
> be sufficient? Though perhaps not quite the same performance, but then
> you could easily not use an application but a dev_add_pack() thing? Or
> probably even tc's mirred?
> 
> And to extend that thought, tc's ife action would let you encapsulate
> the things you have in ethernet headers... I think.

So basically yes, I know we (should) have only IP frames on the session
netdevs, and only QMAP-muxed frames on the underlying netdev (assuming
it exists), but I don't see the point in having kernel code to add
ethernet headers to send it over some other netdev - you can trivially
solve all of that in userspace or quite possibly even in the kernel with
existing (tc) infrastructure.
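As a sketch of the tc route (device names are examples): mirror
everything arriving on the underlying netdev to a second interface,
from which the raw muxed frames can then be captured:

```shell
# mirror all ingress traffic on the underlying netdev to dummy0
tc qdisc add dev rmnet_ipa0 handle ffff: ingress
tc filter add dev rmnet_ipa0 parent ffff: matchall \
    action mirred egress mirror dev dummy0
# a tcpdump on dummy0 then sees the QMAP-muxed frames
```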

johannes



* Re: cellular modem driver APIs
  2019-04-12 12:01                 ` Johannes Berg
  2019-04-12 14:27                   ` Bjørn Mork
@ 2019-04-14 19:09                   ` Subash Abhinov Kasiviswanathan
  2019-04-15  8:27                     ` Johannes Berg
  1 sibling, 1 reply; 18+ messages in thread
From: Subash Abhinov Kasiviswanathan @ 2019-04-14 19:09 UTC (permalink / raw)
  To: Johannes Berg
  Cc: Dan Williams, Bjørn Mork, netdev, Sean Tranchetti,
	Daniele Palmas, Aleksander Morgado, netdev-owner

>> Currently, iproute2 can be used to add the underlying dev as real_dev
>> and create rmnet links over it (ip link add link rmnet_ipa0 name rmnet0
>> type rmnet mux_id 1). Would this continue to work if -
>> 1. the rmnet library were to be included directly as part of the
>> underlying driver itself
>> 2. there is no underlying dev at all
> 
> Yeah, this is the big question.
> 
> If there's no underlying netdev at all, then no, it wouldn't work.
> Though, it could be "faked" in a sense, by doing two things:
> 
> a) having the driver/infra always create a default channel interface,
>    say mux_id 0?
> b) by treating
> 
> 	ip link add link rmnet_ipa0 name rmnet0 type rmnet mux_id 1
> 
>    as not creating a new netdev *on top of* but rather *as sibling to*
>    "rmnet0".
> 
> 
> The alternative is to just keep rmnet and the underlying driver, but tie
> them together a little more closely so that they can - together -
> register with a hypothetical new WWAN framework.
> 
> See - even if we create such a framework in a way that it doesn't
> require an underlying netdev (which I think is the better choice for
> various reasons, particularly related to 5G), then there's nothing that
> says that you *cannot* have it anyway and keep the exact same rmnet +
> underlying driver model.
> 
> Hmm, not sure I understand this. If you do RPS/RSS then that's a
> hardware function, and the netdev doesn't really come into play
> immediately? If the underlying driver directly deals with multiple
> netdevs that's actually an *advantage*, no?

RPS is in SW only though.

I think this shouldn't be a concern if the existing underlying netdev
model could co-exist with the new framework.

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


* Re: cellular modem driver APIs
  2019-04-14 19:09                   ` Subash Abhinov Kasiviswanathan
@ 2019-04-15  8:27                     ` Johannes Berg
  0 siblings, 0 replies; 18+ messages in thread
From: Johannes Berg @ 2019-04-15  8:27 UTC (permalink / raw)
  To: Subash Abhinov Kasiviswanathan
  Cc: Dan Williams, Bjørn Mork, netdev, Sean Tranchetti,
	Daniele Palmas, Aleksander Morgado, netdev-owner

On Sun, 2019-04-14 at 13:09 -0600, Subash Abhinov Kasiviswanathan wrote:

> > Hmm, not sure I understand this. If you do RPS/RSS then that's a
> > hardware function, and the netdev doesn't really come into play
> > immediately? If the underlying driver directly deals with multiple
> > netdevs that's actually an *advantage*, no?
> 
> RPS is in SW only though.

I'm sort of expecting that to change in 5G, but I guess designs will
vary.

> I think this shouldn't be a concern if the existing underlying netdev 
> model could co-exist with the new framework.

Indeed. I'm just thinking that maybe a new framework shouldn't *force*
this model.

johannes



end of thread, other threads:[~2019-04-15  8:28 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-04-03 21:15 cellular modem driver APIs Johannes Berg
2019-04-04  8:51 ` Bjørn Mork
2019-04-04  9:00   ` Johannes Berg
2019-04-04 15:52     ` Dan Williams
2019-04-04 19:16       ` Subash Abhinov Kasiviswanathan
2019-04-04 20:38         ` Johannes Berg
2019-04-04 21:00           ` Johannes Berg
2019-04-05  4:45           ` Subash Abhinov Kasiviswanathan
2019-04-06 17:22             ` Daniele Palmas
2019-04-08 19:49             ` Johannes Berg
2019-04-11  3:54               ` Subash Abhinov Kasiviswanathan
2019-04-12 12:01                 ` Johannes Berg
2019-04-12 14:27                   ` Bjørn Mork
2019-04-12 17:04                     ` Johannes Berg
2019-04-14 19:09                   ` Subash Abhinov Kasiviswanathan
2019-04-15  8:27                     ` Johannes Berg
2019-04-06 17:20     ` Daniele Palmas
2019-04-08 19:51       ` Johannes Berg
