From: Rob Herring <robh@kernel.org>
To: One Thousand Gnomes <gnomes@lxorguk.ukuu.org.uk>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Marcel Holtmann <marcel@holtmann.org>,
	Jiri Slaby <jslaby@suse.com>, Sebastian Reichel <sre@kernel.org>,
	Pavel Machek <pavel@ucw.cz>,
	Peter Hurley <peter@hurleysoftware.com>,
	NeilBrown <neil@brown.name>,
	"Dr . H . Nikolaus Schaller" <hns@goldelico.com>,
	Arnd Bergmann <arnd@arndb.de>,
	Linus Walleij <linus.walleij@linaro.org>,
	"open list:BLUETOOTH DRIVERS" <linux-bluetooth@vger.kernel.org>,
	"linux-serial@vger.kernel.org" <linux-serial@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [RFC PATCH 0/3] UART slave device bus
Date: Thu, 25 Aug 2016 11:40:07 -0500	[thread overview]
Message-ID: <CAL_JsqJ_5+Em5SLSuqiJ3SnazhyEY2XaYh6N1AtR8_+MdcezMg@mail.gmail.com> (raw)
In-Reply-To: <20160818160449.328b2eec@lxorguk.ukuu.org.uk>

On Thu, Aug 18, 2016 at 10:04 AM, One Thousand Gnomes
<gnomes@lxorguk.ukuu.org.uk> wrote:
>> No, the code should be fast as it is so simple. I assume there is some
>> reason the tty buffering is more complex than just a circular buffer.
>
> I would suggest you read n_tty.c carefully and then it'll make a fair bit
> of sense. It has to interlock multiple readers/writers with discipline
> changes and flushes of pending data. At the same time a received
> character may cause output changes including bytes to be queued for
> transmit and the entire lot must not excessively recurse.
>
> It's fun and it took years to make work safely but basically you need to
> handle a simultaneous ldisc change, config change, read of data from the
> buffers, receive, transmit and the receive causing the transmit status to
> change and maybe other transmits, that might have to be sent with
> priority. It's fun 8)
>
> The good news is that nobody but n_tty and maybe n_irda cares on the rx
> side. Every other ldisc consumes the bytes immediately. IRDA hasn't worked
> for years anyway.
>
>> My best guess is because the tty layer has to buffer things for
>> userspace and userspace can be slow to read? Do line disciplines make
>> assumptions about the tty buffering? Is 4KB enough buffering?
>
> RTFS but to save you a bit of effort
>
> 1. 4K is not enough, 64K is not always sufficient, this is why we have
> all the functionality you appear to want to re-invent already in the tty
> buffer logic of the tty_port
> 2. Only n_tty actually uses the tty_port layer buffering
> 3. The ring buffer used for dumb uarts is entirely about latency limits
> on low end processors and only used by some uarts anyway.
>
>> Also, the current receive implementation has no concept of blocking or
>> timeout. Should the uart_dev_rx() function return when there's no more
>> data or wait (with timeout) until all requested data is received?
>> (Probably do all of them is my guess).
>
> Your rx routine needs to be able to run in IRQ context, not block and
> complete in very very short time scales because on some hardware you have
> exactly 9 bit times to recover the data byte and clear the IRQ done.
> Serial really stretches some of the low end embedded processors running
> at 56K/115200, and RS485 at 4Mbits even with 8 bytes of buffering is
> pretty tight. Thus you need very fast buffers for just about any use case.
> Dumb uarts you'll need to keep the existing ring buffer or similar
> (moving to a kfifo would slightly improve performance I think) and queue
> after.
>
>> >> - Convert a real driver/line discipline over to UART bus.
>> >
>> > That's going to be the real test, I recommend trying that as soon as
>> > possible as it will show where the real pain points are :)
>
> The locking. It's taken ten years to debug the current line discipline
> change locking. If you want to be able to switch stuff kernel side
> however it's somewhat easier.
>
> The change should be
>
> Add tty_port->rx(uint8_t *data, uint8_t *flags, unsigned int len)
>
> The semantics of tty_port->rx are
>
> - You may not assume a tty is bound to this port
> - You may be called in IRQ context, but are guaranteed not to get
>   parallel calls for the same port
> - When you return the bytes you passed are history
>
> At that point you can set tty_port->rx to point to the
> tty_flip_buffer_push() and everyone can use it. Slow ones will want to
> queue to a ring buffer then do tty_port->rx (where we do the flush_buffer
> now), fast ones will do the ->rx directly.

Other than the DMA cases, I did not find any examples of UART drivers
keeping internal rx ring buffers. Most or all of the non-DMA cases call
tty_insert_flip_char directly in the ISR. The flow is: insert a series
of characters and flags as we process the receive status, then trigger
a flush of the buffer at the end. That doesn't match up with what you
are proposing for how tty_port->rx would work, and it would change the
receive ISR processing in all the drivers quite a bit. Either we'd have
to call tty_port->rx a character at a time or implement some temporary
buffer. I don't think we want to call things like the BT receive code a
byte at a time, so this needs to be a layer higher. flush_to_ldisc
either needs to be duplicated to handle tty_port->rx or generalized to
call either tty_port->rx or the ldisc's receive_buf. I'm not sure what
to do about ldisc ref counting in the latter case.

Rob
