From: Thomas Gleixner <tglx@linutronix.de>
To: Jesus Sanchez-Palencia <jesus.sanchez-palencia@intel.com>
Cc: netdev@vger.kernel.org, jhs@mojatatu.com,
	xiyou.wangcong@gmail.com, jiri@resnulli.us,
	vinicius.gomes@intel.com, richardcochran@gmail.com,
	anna-maria@linutronix.de, henrik@austad.us,
	John Stultz <john.stultz@linaro.org>,
	levi.pearson@harman.com, edumazet@google.com, willemb@google.com,
	mlichvar@redhat.com
Subject: Re: [RFC v3 net-next 13/18] net/sched: Introduce the TBS Qdisc
Date: Sun, 25 Mar 2018 13:46:32 +0200 (CEST)
Message-ID: <alpine.DEB.2.21.1803241325500.1481@nanos.tec.linutronix.de>
In-Reply-To: <2897b562-06e0-0fcc-4fb1-e8c4469c0faa@intel.com>

On Fri, 23 Mar 2018, Jesus Sanchez-Palencia wrote:
> On 03/22/2018 03:52 PM, Thomas Gleixner wrote:
> > So what's the plan for this? Having TAS as a separate entity or TAS feeding
> > into the proposed 'basic' time transmission thing?
> 
> The second one, I guess.

That's just wrong. It won't work. See below.

> Elaborating, the plan is at some point having TAS as a separate entity,
> but which can use tbs for one of its classes (and cbs for another, and
> strict priority for everything else, etc).
>
> Basically, the design would be something along the lines of 'taprio'. A root qdisc
> that is both time and priority aware, and capable of running a schedule for the
> port. That schedule can run inside the kernel with hrtimers, or just be
> offloaded into the controller if Qbv is supported on HW.
> 
> Because it would expose the inner traffic classes in a mq / mqprio / prio style,
> it would allow for other per-queue qdiscs to be attached to it. On a system
> using the i210, for instance, we could then have tbs installed on traffic class
> 0 with hw offload enabled. The Qbv schedule would be running in SW on the TAS
> entity (i.e. 'taprio') which would be setting the packets' txtime before
> dequeueing packets on a fast path -> tbs -> NIC.
> 
> Similarly, other qdiscs, like cbs, could be installed if all that the traffic class
> requires is traffic shaping once its 'gate' is allowed to execute the selected
> tx algorithm attached to it.
> 
> > I've not yet seen a convincing argument why this low level stuff with all
> > of its weird flavours is superior to something which reflects the basic
> > operating principle of TSN.
> 
> 
> As you know, not all TSN systems are designed the same. Take AVB systems, for
> example. These are not always running on networks that are aware of any time
> schedule, or at least not one quite like what is described by Qbv.
> 
> On those systems there is usually a certain number of streams with different
> priorities that care mostly about having their bandwidth reserved along the
> network. The applications running on such systems are usually based on AVTP,
> thus they already have to calculate and set the "avtp presentation time"
> per-packet themselves. A Qbv scheduler would probably provide very little
> benefit to this domain, IMHO. For "talkers" of these AVB systems, shaping
> traffic using txtime (i.e. tbs) can provide a low-jitter alternative to cbs, for
> instance.

You're looking at it from particular use cases and trying to accommodate
them in the simplest possible way. I don't think that cuts it.

Let's take a step back and look at it from a more general POV without
trying to make it fit to any of the standards first. I'm deliberately NOT
using any of the standard defined terms.

At the (local) network level you always have an explicit plan. This plan
might range from no plan at all to a very elaborate plan which is strict
about when each node is allowed to TX a particular class of packets.

So lets assume we have the following picture:

               [ NIC ]
                  |
        [ Time slice manager ]

Now in the simplest case, the time slice manager has no constraints and
exposes a single input which allows the application to say: "Send my packet
at time X". There is no restriction on 'time X' except if there is a time
collision with an already queued packet or the requested TX time has
already passed. That's close to what you implemented.

  Is the TX timestamp which you defined in the user space ABI a fixed
  scheduling point or is it a deadline?

  That's an important distinction and for this all to work across various
  use cases you need a way to express that in the ABI. It might be an
  implicit property of the socket/channel to which the application
  connects, but you still want to express it from the application side to
  do proper sanity checking.

  Just think about stuff like audio/video streaming. The point of
  transmission does not have to be fixed if you have some intelligent
  controller at the receiving end which can buffer stuff. The only relevant
  information is the deadline, i.e. the latest point in time where the
  packet needs to go out on the wire in order to keep the stream steady at
  the consumer side. Having the notion of a deadline, which is the only
  thing the provider knows about, allows proper utilization by using an
  appropriate scheduling algorithm like EDF (earliest deadline first).
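
  For illustration: assuming the deadline packets are kept in an rbtree
  keyed on their txtime, EDF is nothing more than always taking the
  leftmost node, i.e. the packet whose deadline expires first. A minimal
  kernel-side sketch:

        /* Sketch: 'txtime' acts as a deadline, i.e. the latest point in
         * time at which the packet must hit the wire. The tree is keyed
         * on it, so the leftmost node is the most urgent packet. */
        static struct sk_buff *edf_peek(struct rb_root *root)
        {
                struct rb_node *n = rb_first(root);

                return n ? rb_entry(n, struct sk_buff, rbnode) : NULL;
        }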

  Contrary to that you want very explicit TX points for applications like
  automation control. For this kind of use case there is no wiggle room, it
  has to go out at a fixed time because that's the way control systems
  work.

  This is missing right now and you want to get that right from the very
  beginning. Duct taping it on the interface later on is a bad idea.
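
  To make that concrete, here is a purely illustrative sketch of how that
  property could be expressed at the socket level. The flag name and the
  deadline semantics are made up to show the idea, this is not a proposal
  for the final ABI:

        #include <sys/socket.h>
        #include <time.h>               /* CLOCK_TAI */
        #include <linux/types.h>

        struct sock_txtime {
                clockid_t clockid;      /* reference clock for txtime */
                __u32     flags;        /* semantics of txtime        */
        };

        /* Hypothetical flag: txtime is a deadline and the kernel is
         * free to send earlier, e.g. EDF ordered. Without the flag,
         * txtime is a fixed scheduling point. */
        #define SOF_TXTIME_DEADLINE_MODE        (1 << 0)

        /* 'fd' is an already created socket, SO_TXTIME as per your
         * series. */
        struct sock_txtime cfg = {
                .clockid = CLOCK_TAI,
                .flags   = SOF_TXTIME_DEADLINE_MODE,
        };
        setsockopt(fd, SOL_SOCKET, SO_TXTIME, &cfg, sizeof(cfg));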

Now let's go one step further and create two time slices for whatever
purpose, still on the single node (not network wide). You want to do that
because you want temporal separation of services. The reason might be
bandwidth guarantees, collision avoidance or whatever.

  How does an application which was written for the simple manager with
  no restrictions learn about this?

  Does it learn it the hard way because now the packets which fall into the
  reserved timeslice are rejected? The way you created your interface, the
  answer is yes. That's patently bad as it requires changing the
  application once it runs on a partitioned node.

  So you really want a way for the application to query the timing
  constraints and perhaps other properties of the channel it connects
  to. And you want that now, before the first application starts to use the
  new ABI. If the application developer does not use it, you still have to
  fix the application, but you have to fix it because the developer was a
  lazy bastard and not because the design was bad. That's a major
  difference.
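
  Something along these lines. Completely made up, the sockopt, struct
  and helper below do not exist; it merely illustrates the kind of query
  interface I mean:

        /* Hypothetical query of the channel's timing constraints. */
        struct txtime_constraints {
                __u64 base_time;        /* start point of the schedule   */
                __u64 slice_period;     /* distance between slice starts */
                __u64 slice_length;     /* usable TX window per period   */
                __u32 txtime_mode;      /* fixed TX point or deadline    */
        };

        struct txtime_constraints c;
        socklen_t len = sizeof(c);

        /* SO_TXTIME_CONSTRAINTS and configure_stream() are invented
         * for this example. */
        if (!getsockopt(fd, SOL_SOCKET, SO_TXTIME_CONSTRAINTS, &c, &len))
                configure_stream(c.slice_period, c.slice_length);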

Now that we have two time slices, I'm coming back to your idea of having
your proposed qdisc as the entity which sits right at the network
interface. Let's assume the following:

   [Slice 1: Timed traffic ] [Slice 2: Other Traffic]

  Let's assume further that 'Other traffic' has no idea about time slices at
  all. It's just stuff like ssh, http, etc. So if you keep that design

                 [ NIC ]
                    |
          [ Time slice manager ]
              |           |
    [ Timed traffic ] [ Other traffic ]

  feeding into your proposed TBS thingy, then in case of underutilization
  of the 'Timed traffic' slot you cannot utilize the remaining time by
  pulling 'Other traffic' into the empty slots, because 'Other traffic' is
  restricted to Slice 2 and 'Timed traffic' does not know about 'Other
  traffic' at all. And no, you cannot make TBS magically pull packets from
  'Other traffic', simply because it's not designed for that. So your
  design becomes strictly partitioned and forces underutilization.

  That becomes even worse when you switch to the proposed full hardware
  offloading scheme. In that case the only way to do admission control is
  the TX time of the farthest out packet which is already queued. That
  might work for a single application which controls all of the network
  traffic, but it won't ever work for something more flexible. The more I
  think about it the less interesting full hardware offload becomes. It's
  nice if you have a fully strict scheduling plan for everything, but then
  your admission control is bogus once you have more than one channel as
  input. So yes, it can be used when the card supports it and you have
  other ways to enforce admission control w/o hurting utilization or if you
  don't care about utilization at all. It's also useful for channels which
  are strictly isolated and have a defined TX time. Such traffic can be
  directly fed into the hardware.
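
  To spell out the admission control problem: the hardware queue has to
  stay ordered in time, so the only check possible is against the
  farthest out packet which is already queued. A sketch, assuming a
  single TX queue and 'last_txtime' tracking the newest queued packet:

        /* Sketch: with one channel feeding the queue this works. With
         * two independent channels, a perfectly valid packet of channel
         * A gets rejected merely because channel B already queued
         * something farther out in time. */
        static bool hw_admit(u64 txtime, u64 *last_txtime)
        {
                if (txtime <= *last_txtime)
                        return false;   /* would reorder the HW queue */
                *last_txtime = txtime;
                return true;
        }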

Coming back to the overall scheme. If you start upfront with a time slice
manager which is designed to:

  - Handle multiple channels

  - Expose the time constraints and properties per channel

then you can fit all kinds of use cases, whether designed by committee or
not. You can configure that thing per node or network wide. It does not
make a difference. The only difference is the resulting constraints.

We really want to accommodate everything between the 'no restrictions' and
the 'full network wide explicit plan' case. And it's not rocket science
once you realize that the 'no restrictions' case is just a subset of the
'full network wide explicit plan', simply because it exposes a single
channel where:

	slice period = slice length.

It's that easy, but at the same time you teach the application from the
very beginning to ask for the time constraints, so if it runs on a more
sophisticated system/network, then it will see a different slice period and
a different slice length and can accommodate or react in a useful way
instead of just dying on the 17th packet it tries to send because it is
rejected.
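
The check behind all that is trivial and shows nicely why 'no restrictions'
is just the degenerate configuration. A sketch, glossing over 64bit
division details:

      /* Sketch: does the requested txtime fall into the TX window of the
       * channel? With slice_period == slice_length the condition is
       * always true, i.e. the 'no restrictions' case. */
      static bool txtime_in_slice(u64 txtime, u64 base_time,
                                  u64 slice_period, u64 slice_length)
      {
              u64 offset = (txtime - base_time) % slice_period;

              return offset < slice_length;
      }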

We really want to design for this as we want to be able to run the video
stream on the same node and network which does robot control without
changing the video application. That's not a theoretical problem. These use
cases exist today, but they are forced to use different networks for the
two. But if you look at the utilization of both, then they fit very well
into one, and industry certainly wants to go for that.

That implies that you need constraint-aware applications from the very
beginning and that requires a proper ABI in the first place. The proposed
ad hoc mode does not qualify. Please be aware that you are creating a user
space ABI and not a random in-kernel interface which can be changed at any
given time.

So lets look once more at the picture in an abstract way:

             [ NIC ]
                |
      [ Time slice manager ]
          |           |
      [ Ch 0 ] ... [ Ch N ]

So you have a bunch of properties here:

1) Number of channels ranging from 1 to N

2) Start point, slice period and slice length per channel

3) Queueing modes assigned per channel. Again, that might be anything from
   'feed through' through FIFO and PRIO to more complex things like EDF.

   The queueing mode can also influence properties like the meaning of the
   TX time, i.e. strict or deadline.
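
If it helps, all of the above boils down to a per-channel description along
these lines. A sketch, all names made up:

      enum qmode { Q_FEED_THROUGH, Q_FIFO, Q_PRIO, Q_EDF };

      struct channel {
              u64        base_time;    /* start point of the schedule */
              u64        slice_period; /* repeat interval             */
              u64        slice_length; /* TX window within the period */
              enum qmode mode;         /* queueing mode               */
              bool       deadline;     /* txtime: deadline vs. fixed  */
      };

      struct tsm {
              unsigned int   num_channels;  /* 1 .. N */
              struct channel ch[];
      };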

Please sit back and map your use cases, standards or whatever you care
about into the above and I would be very surprised if they don't fit.

Thanks,

	tglx
