From: Jesus Sanchez-Palencia <jesus.sanchez-palencia@intel.com>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: netdev@vger.kernel.org, jhs@mojatatu.com,
	xiyou.wangcong@gmail.com, jiri@resnulli.us,
	vinicius.gomes@intel.com, richardcochran@gmail.com,
	anna-maria@linutronix.de, henrik@austad.us,
	John Stultz <john.stultz@linaro.org>,
	levi.pearson@harman.com, edumazet@google.com, willemb@google.com,
	mlichvar@redhat.com
Subject: Re: [RFC v3 net-next 13/18] net/sched: Introduce the TBS Qdisc
Date: Thu, 22 Mar 2018 13:25:44 -0700
Message-ID: <65da0648-b835-a171-3986-2d1ddcb8ea10@intel.com>
In-Reply-To: <alpine.DEB.2.21.1803211758140.3754@nanos.tec.linutronix.de>

Hi Thomas,


On 03/21/2018 03:29 PM, Thomas Gleixner wrote:
> On Wed, 21 Mar 2018, Thomas Gleixner wrote:
>> If you look at the use cases of TDM in various fields then FIFO mode is
>> pretty much useless. In industrial/automotive fieldbus applications the
>> various time slices are filled by different threads or even processes.
> 
> That brings me to a related question. The TDM cases I'm familiar with that
> aim to use this utilize multiple periodic time slices, aka 802.1Qbv
> time-aware scheduling.
> 
> Simple example:
> 
> [1a][1b][1c][1d]		[1a][1b][1c][1d]		[.....
> 		[2a][2b]			[2c][2d]
> 			[3a]				[3b]
> 			    [4a]			    [4b]
> ---------------------------------------------------------------------->	t		    
> 
> where 1-4 is the slice level and a-d are network nodes.
> 
> In most cases the slice levels on a node are handled by different
> applications or threads. Some of the protocols utilize dedicated time slice
> levels - let's assume '4' in the above example - to run general network
> traffic which might even be allowed to have collisions, i.e. [4a-d] would
> become [4] and any node can send; the involved components like switches are
> supposed to handle that.
> 
> I'm not seeing how TBS is going to assist with any of that. It requires
> everything to be handled at the application level. Not really useful,
> especially not for general traffic which does not know about the scheduling
> bands at all.
> 
> If you look at an industrial control node. It basically does:
> 
> 	queue_first_packet(tx, slice1);
> 	while (!stop) {
> 		if (wait_for_packet(rx) == ERROR)
> 			goto errorhandling;
> 		tx = do_computation(rx);
> 		queue_next_tx(tx, slice1);
> 	}
> 
> that's a pretty common pattern for these kinds of applications. For audio
> sources queue_next_tx() might be triggered by the input sampler which needs to
> be synchronized to the network slices anyway in order to work properly.
> 
> TBS in its current implementation is nice as a proof of concept, but it solves
> just a small portion of the complete problem space. I have the suspicion
> that this was 'designed' to replace the user space hack in the AVNU stack
> with something close to it. Not really a good plan to be honest.
> 
> I think what we really want is a strict periodic scheduler which supports
> multiple slices as shown above because that's what all relevant TDM use
> cases need: A/V, industrial fieldbusses .....
> 
>   |---------------------------------------------------------|
>   |                                                         |
>   |                           TAS                           |<- Config
>   |    1               2               3               4    |
>   |---------------------------------------------------------|
>        |               |               |               |
>        |               |               |               |
>        |               |               |               |
>        |               |               |               |
>   [DirectSocket]   [Qdisc FIFO]   [Qdisc Prio]     [Qdisc FIFO]
>                        |               |               |
>                        |               |               |
>                    [Socket]        [Socket]    [General traffic]
> 
> 
> The interesting thing here is that it does not require any time stamp
> information brought in from the application. That's especially good for
> general network traffic which is routed through a dedicated time slot. If
> we don't have that then we need a user space scheduler which does exactly
> the same thing and we have to route the general traffic out to user space
> and back into the kernel, which is obviously a pointless exercise.
> 
> There are all kind of TDM schemes out there which are not directly driven
> by applications, but rather route categorized traffic like VLANs through
> dedicated time slices. That works pretty well with the above scheme because
> in that case the applications might be completely oblivious to the tx
> time schedule.
> 
> Surely there are protocols which do not utilize every time slice they could
> use, so we need a way to tell the number of empty slices between two
> consecutive packets. There are also different policies for the unused time
> slices, like sending dummy frames or nothing at all, which need to be
> addressed, but I don't think that changes the general approach.
> 
> There might be some special cases for setup or node hotplug, but the
> protocols I'm familiar with handle these in dedicated time slices or
> through general traffic so it should just fit in.
> 
> I'm surely missing some details, but from my knowledge about the protocols
> which want to utilize this, the general direction should be fine.
> 
> Feel free to tell me that I'm missing the point completely though :)
> 
> Thoughts?


We agree with most of the above. :)
Actually, last year Vinicius shared our ideas for a "time-aware priority" root
qdisc, dubbed 'taprio', as part of the cbs RFC cover letter:

https://patchwork.ozlabs.org/cover/808504/

Our plan was to work on the Qbv-like (per-port) scheduling right after the cbs
qdisc (Qav), but the feedback here and offline was that there were use cases
for a simpler (per-queue) launchtime approach as well. We decided to invest in
that first, basically postponing the 'taprio' qdisc until a NIC with HW support
for it was available.

You are right, and we agree, that using tbs for a per-port schedule of any sort
will require a SW scheduler to be developed on top of it, but we never claimed
otherwise. Our vision has always been that these are separate mechanisms with
different use cases, so we do see value in the kernel providing both.
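
Just to make the per-queue side concrete: from the application's point of
view, tbs boils down to stamping each packet with an absolute launch time.
Here is a minimal sketch, assuming the SO_TXTIME / SCM_TXTIME interface from
this series (the socket is assumed to have been configured beforehand with
the clockid and drop_if_late params from patch 08/18; the option value and
cmsg layout below follow the RFC and may still change, so treat this as
illustrative rather than final ABI):

#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/uio.h>

#ifndef SO_TXTIME
#define SO_TXTIME	61		/* illustrative value, not final */
#define SCM_TXTIME	SO_TXTIME
#endif

/* Send one packet with an absolute launch time, in nanoseconds of the
 * clock that was configured on the socket via setsockopt(SO_TXTIME). */
static ssize_t send_at(int fd, const void *buf, size_t len, uint64_t txtime_ns)
{
	char control[CMSG_SPACE(sizeof(txtime_ns))] = {0};
	struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
	struct msghdr msg = {
		.msg_iov = &iov,
		.msg_iovlen = 1,
		.msg_control = control,
		.msg_controllen = sizeof(control),
	};
	struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);

	cm->cmsg_level = SOL_SOCKET;
	cm->cmsg_type = SCM_TXTIME;
	cm->cmsg_len = CMSG_LEN(sizeof(txtime_ns));
	memcpy(CMSG_DATA(cm), &txtime_ns, sizeof(txtime_ns));

	return sendmsg(fd, &msg, 0);
}

With that, the queue_next_tx(tx, slice1) step in your control loop becomes
send_at(fd, tx, len, start_of_next_slice1), with the timestamp computed by
the application (or a library) from the PTP-synchronized schedule.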

In other words, tbs is not the final solution for Qbv, and we agree that a 'TAS'
qdisc is still necessary. Given the wide range of applications and hardware out
there, we need both, especially since one does not block the other.
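
For the per-port side, the input to a 'TAS' qdisc would be exactly the kind
of static Qbv schedule you drew above. Something along these lines, purely as
a hypothetical sketch (none of these structures or names exist anywhere yet):

#include <stdint.h>

/* Hypothetical description of a Qbv-style per-port schedule, as a TAS
 * qdisc could consume it; every name here is made up for illustration. */
struct tas_sched_entry {
	uint8_t  gate_mask;	/* traffic classes allowed to transmit */
	uint32_t interval_ns;	/* duration of this time slice */
};

struct tas_schedule {
	int64_t  base_time_ns;	/* absolute start of the first cycle */
	uint32_t num_entries;
	struct tas_sched_entry entries[4];
};

/* Four slices per cycle, matching your diagram: classes 0-2 get
 * dedicated slices, class 3 carries general traffic. */
static const struct tas_schedule example = {
	.base_time_ns = 0,	/* taken from PTP time in practice */
	.num_entries  = 4,
	.entries = {
		{ .gate_mask = 1 << 0, .interval_ns = 250000 },
		{ .gate_mask = 1 << 1, .interval_ns = 250000 },
		{ .gate_mask = 1 << 2, .interval_ns = 250000 },
		{ .gate_mask = 1 << 3, .interval_ns = 250000 },
	},
};

The nice property, as you pointed out, is that only the qdisc needs this
table: the sockets feeding classes 0-2, and the general traffic behind
class 3, can stay completely oblivious to the tx time schedule.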


What do you think?

Thanks,
Jesus
