* [PATCH 0/2] Support for reserving bandwidth on L2CAP socket
@ 2012-07-24  9:54 Manoj Kumar Sharma
  2012-07-24  9:54 ` [PATCH 1/2] Bluetooth: Usage of HCI channels in L2CAP Manoj Kumar Sharma
  2012-07-24 13:53 ` [PATCH 0/2] Support for reserving bandwidth on L2CAP socket Marcel Holtmann
  0 siblings, 2 replies; 13+ messages in thread
From: Manoj Kumar Sharma @ 2012-07-24  9:54 UTC (permalink / raw)
  To: linux-bluetooth; +Cc: Anurag Gupta, Manoj Kumar Sharma

These patches allow an L2CAP socket user to reserve bandwidth as a
percentage. The underlying socket then reserves the corresponding
number of HCI ACL credits for its L2CAP channel.
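
For illustration, a minimal user-space sketch of how a client could
request the reservation (a sketch only: it assumes the
BT_BANDWIDTH_RESERVE value introduced in patch 2/2, which may need to
be defined locally until the headers carry it, and an already created
L2CAP socket; error handling is omitted):

  #include <stdint.h>
  #include <sys/socket.h>
  #include <bluetooth/bluetooth.h>

  #ifndef BT_BANDWIDTH_RESERVE
  #define BT_BANDWIDTH_RESERVE 14   /* value proposed in this series */
  #endif

  /* Ask the kernel to reserve "percentage" of the ACL credits for the
   * L2CAP channel behind socket "sk".
   */
  static int reserve_l2cap_bandwidth(int sk, uint32_t percentage)
  {
          return setsockopt(sk, SOL_BLUETOOTH, BT_BANDWIDTH_RESERVE,
                            &percentage, sizeof(percentage));
  }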

Manoj Kumar Sharma (2):
  Bluetooth: Usage of HCI channels in L2CAP
  Bluetooth: L2CAP socket option to reserve bandwidth

 include/net/bluetooth/bluetooth.h |    2 ++
 include/net/bluetooth/hci_core.h  |    2 ++
 include/net/bluetooth/l2cap.h     |    2 ++
 net/bluetooth/hci_conn.c          |   30 ++++++++++++++++++++++++++++++
 net/bluetooth/hci_core.c          |   27 ++++++++++++++++++++++++++-
 net/bluetooth/l2cap_core.c        |   12 +++++++++++-
 net/bluetooth/l2cap_sock.c        |   11 +++++++++++
 7 files changed, 84 insertions(+), 2 deletions(-)



* [PATCH 1/2] Bluetooth: Usage of HCI channels in L2CAP
  2012-07-24  9:54 [PATCH 0/2] Support for reserving bandwidth on L2CAP socket Manoj Kumar Sharma
@ 2012-07-24  9:54 ` Manoj Kumar Sharma
  2012-07-24  9:55   ` [PATCH 2/2] Bluetooth: L2CAP socket option to reserve bandwidth Manoj Kumar Sharma
  2012-07-24 13:53 ` [PATCH 0/2] Support for reserving bandwidth on L2CAP socket Marcel Holtmann
  1 sibling, 1 reply; 13+ messages in thread
From: Manoj Kumar Sharma @ 2012-07-24  9:54 UTC (permalink / raw)
  To: linux-bluetooth; +Cc: Anurag Gupta, Manoj Kumar Sharma

Use one HCI channel per L2CAP channel. This one-to-one mapping allows
L2CAP channel features to be enforced on the related HCI channel. The
existing per-connection HCI channel is kept only for L2CAP
signalling.

Signed-off-by: Manoj Kumar Sharma <manojkr.sharma@stericsson.com>
---
 include/net/bluetooth/l2cap.h |    1 +
 net/bluetooth/l2cap_core.c    |    9 ++++++++-
 2 files changed, 9 insertions(+), 1 deletions(-)

diff --git a/include/net/bluetooth/l2cap.h b/include/net/bluetooth/l2cap.h
index 9b242c6..f26a468 100644
--- a/include/net/bluetooth/l2cap.h
+++ b/include/net/bluetooth/l2cap.h
@@ -410,6 +410,7 @@ struct l2cap_chan {
 	struct sock *sk;
 
 	struct l2cap_conn	*conn;
+	struct hci_chan	*hchan;
 
 	__u8		state;
 
diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
index 6f9c25b..ad6a65f 100644
--- a/net/bluetooth/l2cap_core.c
+++ b/net/bluetooth/l2cap_core.c
@@ -300,6 +300,8 @@ void l2cap_chan_destroy(struct l2cap_chan *chan)
 
 void __l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan)
 {
+	struct hci_chan *hchan;
+
 	BT_DBG("conn %p, psm 0x%2.2x, dcid 0x%4.4x", conn,
 			chan->psm, chan->dcid);
 
@@ -342,6 +344,9 @@ void __l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan)
 	chan->local_acc_lat	= L2CAP_DEFAULT_ACC_LAT;
 	chan->local_flush_to	= L2CAP_DEFAULT_FLUSH_TO;
 
+	hchan = hci_chan_create(conn->hcon);
+	chan->hchan = hchan;
+
 	l2cap_chan_hold(chan);
 
 	list_add(&chan->list, &conn->chan_l);
@@ -396,6 +401,8 @@ static void l2cap_chan_del(struct l2cap_chan *chan, int err)
 
 	skb_queue_purge(&chan->tx_q);
 
+	hci_chan_del(chan->hchan);
+
 	if (chan->mode == L2CAP_MODE_ERTM) {
 		struct srej_list *l, *tmp;
 
@@ -596,7 +603,7 @@ static void l2cap_do_send(struct l2cap_chan *chan, struct sk_buff *skb)
 		flags = ACL_START;
 
 	bt_cb(skb)->force_active = test_bit(FLAG_FORCE_ACTIVE, &chan->flags);
-	hci_send_acl(chan->conn->hchan, skb, flags);
+	hci_send_acl(chan->hchan, skb, flags);
 }
 
 static inline void l2cap_send_sframe(struct l2cap_chan *chan, u32 control)
-- 
1.6.6.1



* [PATCH 2/2] Bluetooth: L2CAP socket option to reserve bandwidth
  2012-07-24  9:54 ` [PATCH 1/2] Bluetooth: Usage of HCI channels in L2CAP Manoj Kumar Sharma
@ 2012-07-24  9:55   ` Manoj Kumar Sharma
  0 siblings, 0 replies; 13+ messages in thread
From: Manoj Kumar Sharma @ 2012-07-24  9:55 UTC (permalink / raw)
  To: linux-bluetooth; +Cc: Anurag Gupta, Manoj Kumar Sharma

This patch adds a socket option for reserving a percentage of the ACL
bandwidth for a channel. This can be useful for reserving bandwidth
for AVDTP streaming channels.
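
As an illustration of the arithmetic (the numbers are only an
example): on a controller reporting 8 ACL buffers (hdev->acl_pkts ==
8), requesting 30% reserves (8 * 30 + 50) / 100 = 2 credits for the
channel, and a request that would push the total reservation up to
the full buffer count is rejected with -EBUSY.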

Signed-off-by: Manoj Kumar Sharma <manojkr.sharma@stericsson.com>
---
 include/net/bluetooth/bluetooth.h |    2 ++
 include/net/bluetooth/hci_core.h  |    2 ++
 include/net/bluetooth/l2cap.h     |    1 +
 net/bluetooth/hci_conn.c          |   30 ++++++++++++++++++++++++++++++
 net/bluetooth/hci_core.c          |   27 ++++++++++++++++++++++++++-
 net/bluetooth/l2cap_core.c        |    3 +++
 net/bluetooth/l2cap_sock.c        |   11 +++++++++++
 7 files changed, 75 insertions(+), 1 deletions(-)

diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h
index a65910b..3d30fbb 100644
--- a/include/net/bluetooth/bluetooth.h
+++ b/include/net/bluetooth/bluetooth.h
@@ -118,6 +118,8 @@ int bt_err(const char *fmt, ...);
 #define BT_ERR(fmt, ...)	bt_err(fmt "\n", ##__VA_ARGS__)
 #define BT_DBG(fmt, ...)	pr_debug(fmt "\n", ##__VA_ARGS__)
 
+#define BT_BANDWIDTH_RESERVE	14
+
 /* Connection and socket states */
 enum {
 	BT_CONNECTED = 1, /* Equal to TCP_ESTABLISHED to make net code happy */
diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
index db1c5df..ceff6d6 100644
--- a/include/net/bluetooth/hci_core.h
+++ b/include/net/bluetooth/hci_core.h
@@ -346,6 +346,7 @@ struct hci_chan {
 	struct hci_conn *conn;
 	struct sk_buff_head data_q;
 	unsigned int	sent;
+	unsigned int	num_pkt_reserved;
 };
 
 extern struct list_head hci_dev_list;
@@ -567,6 +568,7 @@ void hci_conn_hash_flush(struct hci_dev *hdev);
 void hci_conn_check_pending(struct hci_dev *hdev);
 
 struct hci_chan *hci_chan_create(struct hci_conn *conn);
+int hci_chan_reserve_bandwidth(struct hci_chan *chan, __u8 percentage);
 int hci_chan_del(struct hci_chan *chan);
 void hci_chan_list_flush(struct hci_conn *conn);
 
diff --git a/include/net/bluetooth/l2cap.h b/include/net/bluetooth/l2cap.h
index f26a468..ec680f6 100644
--- a/include/net/bluetooth/l2cap.h
+++ b/include/net/bluetooth/l2cap.h
@@ -482,6 +482,7 @@ struct l2cap_chan {
 	__u32		remote_sdu_itime;
 	__u32		remote_acc_lat;
 	__u32		remote_flush_to;
+	__u8		bandwidth_reserved;
 
 	struct delayed_work	chan_timer;
 	struct delayed_work	retrans_timer;
diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
index 5238b6b..a211c6c 100644
--- a/net/bluetooth/hci_conn.c
+++ b/net/bluetooth/hci_conn.c
@@ -954,6 +954,36 @@ struct hci_chan *hci_chan_create(struct hci_conn *conn)
 	return chan;
 }
 
+int hci_chan_reserve_bandwidth(struct hci_chan *chan, __u8 percentage)
+{
+	struct hci_dev *hdev = chan->conn->hdev;
+	unsigned int num_pkts_requested = (hdev->acl_pkts * percentage + 50) / 100;
+	struct hci_conn_hash *h = &hdev->conn_hash;
+	struct hci_conn *conn, *n;
+	unsigned int total_pkts_reserved = 0;
+	unsigned int req_pkts_reserved = 0;
+
+	BT_DBG("Packets reserved calculated %d", num_pkts_requested);
+
+	list_for_each_entry_safe(conn, n, &h->list, list) {
+		struct hci_chan *tmp, *ch;
+
+		list_for_each_entry_safe(tmp, ch, &conn->chan_list, list)
+			total_pkts_reserved += tmp->num_pkt_reserved;
+	}
+
+	req_pkts_reserved = total_pkts_reserved
+							+ num_pkts_requested - chan->num_pkt_reserved;
+
+	if (req_pkts_reserved >= hdev->acl_pkts)
+		return -EBUSY;
+
+	chan->num_pkt_reserved = num_pkts_requested;
+
+	return 0;
+}
+EXPORT_SYMBOL(hci_chan_reserve_bandwidth);
+
 int hci_chan_del(struct hci_chan *chan)
 {
 	struct hci_conn *conn = chan->conn;
diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
index d6dc44c..bf365f1 100644
--- a/net/bluetooth/hci_core.c
+++ b/net/bluetooth/hci_core.c
@@ -2393,10 +2393,11 @@ static inline struct hci_chan *hci_chan_sent(struct hci_dev *hdev, __u8 type,
 						int *quote)
 {
 	struct hci_conn_hash *h = &hdev->conn_hash;
-	struct hci_chan *chan = NULL;
+	struct hci_chan *chan = NULL, *chan_reserved = NULL;
 	int num = 0, min = ~0, cur_prio = 0;
 	struct hci_conn *conn;
 	int cnt, q, conn_num = 0;
+	int num_pkt_reserved = 0;
 
 	BT_DBG("%s", hdev->name);
 
@@ -2419,6 +2420,11 @@ static inline struct hci_chan *hci_chan_sent(struct hci_dev *hdev, __u8 type,
 			if (skb_queue_empty(&tmp->data_q))
 				continue;
 
+			num_pkt_reserved += tmp->num_pkt_reserved;
+
+			if (tmp->num_pkt_reserved)
+				chan_reserved = tmp;
+
 			skb = skb_peek(&tmp->data_q);
 			if (skb->priority < cur_prio)
 				continue;
@@ -2464,6 +2470,25 @@ static inline struct hci_chan *hci_chan_sent(struct hci_dev *hdev, __u8 type,
 
 	q = cnt / num;
 	*quote = q ? q : 1;
+	if (num_pkt_reserved && chan_reserved &&
+		(chan_reserved->conn->sent < num_pkt_reserved)) {
+		if (!chan->num_pkt_reserved) {
+			int unreserved_credits_left = cnt - num_pkt_reserved;
+
+			if (*quote > unreserved_credits_left)
+				*quote = unreserved_credits_left > 0 ?
+							unreserved_credits_left : 0;
+
+			if (!*quote) {
+				/* Send the packet of reserved channel,
+				 * since bandwidth is not available for sending
+				 * any packets on other channel.
+				 */
+				chan = chan_reserved;
+				*quote = q ? q : 1;
+			}
+		}
+	}
 	BT_DBG("chan %p quote %d", chan, *quote);
 	return chan;
 }
diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
index ad6a65f..97bc657 100644
--- a/net/bluetooth/l2cap_core.c
+++ b/net/bluetooth/l2cap_core.c
@@ -347,6 +347,9 @@ void __l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan)
 	hchan = hci_chan_create(conn->hcon);
 	chan->hchan = hchan;
 
+	if (hchan)
+		hci_chan_reserve_bandwidth(hchan, chan->bandwidth_reserved);
+
 	l2cap_chan_hold(chan);
 
 	list_add(&chan->list, &conn->chan_l);
diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
index 04e7c17..458aca4 100644
--- a/net/bluetooth/l2cap_sock.c
+++ b/net/bluetooth/l2cap_sock.c
@@ -692,6 +692,17 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname, ch
 		chan->chan_policy = (u8) opt;
 		break;
 
+	case BT_BANDWIDTH_RESERVE:
+		if (get_user(opt, (u32 __user *) optval)) {
+			err = -EFAULT;
+			break;
+		}
+		chan->bandwidth_reserved = opt;
+
+		if (chan->hchan)
+			hci_chan_reserve_bandwidth(chan->hchan, chan->bandwidth_reserved);
+		break;
+
 	default:
 		err = -ENOPROTOOPT;
 		break;
-- 
1.6.6.1



* Re: [PATCH 0/2] Support for reserving bandwidth on L2CAP socket
  2012-07-24  9:54 [PATCH 0/2] Support for reserving bandwidth on L2CAP socket Manoj Kumar Sharma
  2012-07-24  9:54 ` [PATCH 1/2] Bluetooth: Usage of HCI channels in L2CAP Manoj Kumar Sharma
@ 2012-07-24 13:53 ` Marcel Holtmann
  2012-07-27  5:14   ` Manoj Sharma
  1 sibling, 1 reply; 13+ messages in thread
From: Marcel Holtmann @ 2012-07-24 13:53 UTC (permalink / raw)
  To: Manoj Kumar Sharma; +Cc: linux-bluetooth, Anurag Gupta

Hi Manoj,

> These patches allows L2CAP socket user to reserve bandwidth in
> percentage. Underlying socket reserves calculated number of
> HCI credits for this L2CAP channel.

this description is by far not enough. Explain why you are doing this.
And make it a detailed description, preferably with hcidump traces
showing why this makes a difference at all.

Regards

Marcel




* Re: [PATCH 0/2] Support for reserving bandwidth on L2CAP socket
  2012-07-24 13:53 ` [PATCH 0/2] Support for reserving bandwidth on L2CAP socket Marcel Holtmann
@ 2012-07-27  5:14   ` Manoj Sharma
  2012-07-27 14:37     ` Marcel Holtmann
  0 siblings, 1 reply; 13+ messages in thread
From: Manoj Sharma @ 2012-07-27  5:14 UTC (permalink / raw)
  To: Marcel Holtmann; +Cc: linux-bluetooth, Anurag Gupta

Hi Marcel,

On 7/24/12, Marcel Holtmann <marcel@holtmann.org> wrote:
> Hi Manoj,
>
>> These patches allows L2CAP socket user to reserve bandwidth in
>> percentage. Underlying socket reserves calculated number of
>> HCI credits for this L2CAP channel.
>
> this description is by far not enough. Explain why you are doing this.
> And make it a detailed description. Preferable with hcidump traces
> showing why this makes a difference at all.
>
This patch set simply adds an additional L2CAP socket option for
reserving bandwidth.
The reserved bandwidth translates into a calculated number of ACL
data credits being set aside, so L2CAP channels without this option
set can no longer use all of the controller's ACL buffers. This
ensures that whenever an L2CAP channel with this option set has data
to send, it does not starve or wait because other channels are
already occupying all of the controller buffers.

The scenario this targets is simultaneous AVDTP streaming alongside
other channels (e.g. OPP, PBAP, etc.). Reserving credits lets the
AVDTP stream flow to the controller without being obstructed by the
concurrent traffic, which helps remove glitches in music streaming
over Bluetooth.
Please suggest whether this description is sufficient and whether I
should push the patch set again.
> Regards
>
> Marcel
>

Regards,
Manoj


* Re: [PATCH 0/2] Support for reserving bandwidth on L2CAP socket
  2012-07-27  5:14   ` Manoj Sharma
@ 2012-07-27 14:37     ` Marcel Holtmann
  2012-07-30  6:30       ` Manoj Sharma
  0 siblings, 1 reply; 13+ messages in thread
From: Marcel Holtmann @ 2012-07-27 14:37 UTC (permalink / raw)
  To: Manoj Sharma; +Cc: linux-bluetooth, Anurag Gupta

Hi Manoj,

> >> These patches allows L2CAP socket user to reserve bandwidth in
> >> percentage. Underlying socket reserves calculated number of
> >> HCI credits for this L2CAP channel.
> >
> > this description is by far not enough. Explain why you are doing this.
> > And make it a detailed description. Preferable with hcidump traces
> > showing why this makes a difference at all.
> >
> This patch simply adds an additional L2CAP socket option for reserving
> bandwidth.
> The reserved bandwidth would result into reserving calculated number
> of ACL data credits. Thus the L2CAP channels without this option set
> would not be able to use all available ACL buffers in controller. This
> would ensure that whenever an L2CAP channel with this option set has
> data to send, it does not starve or wait because of other channels
> already using all controller buffers.
> 
> Above explanation is most suitable in case when simultaneous AVDTP
> streaming channels and other channels (e.g. OPP, PBAP etc) are in
> action. Such an arrangement for reserving credits would allow AVDTP
> stream to flow to controller without any obstacle from simultaneous
> traffic and help removing glitches in music streaming over Bluetooth.
> Please suggest if this description is sufficient and if I should push
> patch-set again.

and what is the problem with using SO_PRIORITY for this?

Regards

Marcel




* Re: [PATCH 0/2] Support for reserving bandwidth on L2CAP socket
  2012-07-27 14:37     ` Marcel Holtmann
@ 2012-07-30  6:30       ` Manoj Sharma
  2012-07-30 16:11         ` Luiz Augusto von Dentz
  0 siblings, 1 reply; 13+ messages in thread
From: Manoj Sharma @ 2012-07-30  6:30 UTC (permalink / raw)
  To: Marcel Holtmann; +Cc: linux-bluetooth, Anurag Gupta

Hi Marcel,

On 7/27/12, Marcel Holtmann <marcel@holtmann.org> wrote:
> Hi Manoj,
>
>> >> These patches allows L2CAP socket user to reserve bandwidth in
>> >> percentage. Underlying socket reserves calculated number of
>> >> HCI credits for this L2CAP channel.
>> >
>> > this description is by far not enough. Explain why you are doing this.
>> > And make it a detailed description. Preferable with hcidump traces
>> > showing why this makes a difference at all.
>> >
>> This patch simply adds an additional L2CAP socket option for reserving
>> bandwidth.
>> The reserved bandwidth would result into reserving calculated number
>> of ACL data credits. Thus the L2CAP channels without this option set
>> would not be able to use all available ACL buffers in controller. This
>> would ensure that whenever an L2CAP channel with this option set has
>> data to send, it does not starve or wait because of other channels
>> already using all controller buffers.
>>
>> Above explanation is most suitable in case when simultaneous AVDTP
>> streaming channels and other channels (e.g. OPP, PBAP etc) are in
>> action. Such an arrangement for reserving credits would allow AVDTP
>> stream to flow to controller without any obstacle from simultaneous
>> traffic and help removing glitches in music streaming over Bluetooth.
>> Please suggest if this description is sufficient and if I should push
>> patch-set again.
>
> and what is the problem with using SO_PRIORITY for this?
>
One problem I have faced when using SO_PRIORITY is explained below.

Suppose we have two links A & B, link A has higher priority than link
B, and outgoing data transfer is active on both links. Now if the
device on link A moves out of range, there will be a lot of failures
and the number of re-transmissions on link A will increase.
Consequently, at any time the host will have a significant number of
packets for link A accumulated because of the poor link quality. But
since link A packets have higher priority, link B packets suffer
indefinitely as long as the link A packet queue in the host is
non-empty. Thus link B protocols may fail due to expiring timers and,
finally, disconnection at the upper layers.

Second problem:
We have two links as in the scenario above; say link A is used by
AVDTP and link B is used by OBEX. The host can reach a situation
where all controller buffers are used by OBEX while AVDTP is waiting
for a free buffer. Now for some reason (e.g. distance) OBEX link B
goes weak. This delays transmission of the OBEX packets already held
by the controller, and consequently the AVDTP packets are also
delayed, which causes glitches in music streaming and a bad user
experience.

These are the basic problems I have faced; they are why I felt the
need for a similar but different mechanism and came up with this
solution, which fixes both of the problems explained above. Further
suggestions based on this explanation are welcome.

> Regards
>
> Marcel
>
>
>
Best regards,
Manoj


* Re: [PATCH 0/2] Support for reserving bandwidth on L2CAP socket
  2012-07-30  6:30       ` Manoj Sharma
@ 2012-07-30 16:11         ` Luiz Augusto von Dentz
  2012-07-31 11:30           ` Manoj Sharma
  0 siblings, 1 reply; 13+ messages in thread
From: Luiz Augusto von Dentz @ 2012-07-30 16:11 UTC (permalink / raw)
  To: Manoj Sharma; +Cc: Marcel Holtmann, linux-bluetooth, Anurag Gupta

Hi Manoj,

On Mon, Jul 30, 2012 at 9:30 AM, Manoj Sharma <ursmanoj@gmail.com> wrote:
> One problem which I have faced using SO_PRIORITY is explained below.
>
> Suppose we have 2 links A & B and link A has higher priority than link
> B. And outgoing data transfer is active on both links. Now if device
> on link A goes far, there would be lot of failures and number of
> re-transmissions would increase for link A. Consequently at any time
> host would have significant number of packets for link A, getting
> accumulated due to poor quality of link.But since link A packets have
> higher priority, link B packets would suffer infinitely as long as
> link A packet queue in host is non-empty. Thus link B protocols may
> fail due to timers expiring and finally disconnection at upper layers.

There is a mechanism to avoid starvation. Also, apparently you didn't
study the code, since the priority is per L2CAP channel, not per
link, so we are able to prioritize per profile.

> Second problem:
> We have two links similar to above scenario. Say link A is being used
> by AVDTP and link B is being used by OBEX. Host can come across a
> situation where all controller buffers are used by OBEX and AVDTP is
> waiting for a free buffer. Now due to some reason (e.g. distance) OBEX
> link B goes weak. This results into delay in transmission of OBEX
> packets already held by controller and consequently AVDTP packets also
> get delayed which causes glitches in music streaming and user
> experience goes bad.

That is exactly what SO_PRIORITY fixed: by setting SO_PRIORITY you
prioritize the AVDTP stream over OBEX, which means AVDTP can use a
bigger share of the bandwidth while OBEX uses the remainder.
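
For reference, the user-space side of that is roughly the following
sketch (the socket fd and the priority value are placeholders, not a
recommendation):

  #include <sys/socket.h>

  /* Raise the priority of the AVDTP streaming socket so the HCI
   * scheduler favours its L2CAP channel over e.g. OBEX traffic.
   */
  static int set_avdtp_priority(int sk, int prio)
  {
          return setsockopt(sk, SOL_SOCKET, SO_PRIORITY,
                            &prio, sizeof(prio));
  }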

The credit-based algorithm actually complicates things more than it
solves them here, because it should fail when the requested bandwidth
is not available, so we would also need a way to query how many
credits are free. Any kind of bandwidth reservation may also be
overkill with things like variable bit rate, where you would need to
know beforehand the maximum possible bandwidth to reserve, and those
credits can then not be used by anyone else.

> These are the basic problems which I have faced and hence felt
> necessity of a similar but different mechanism and came up with this
> solution. This solution fixes both of the problems explained above.
> Based on the explanation above your suggestion is required further.

Could you please show us on what system you found this problem? We
could possibly help you figure out what is going wrong. Please note
that SO_PRIORITY support was introduced in 3.0 and some systems don't
actually use it; in fact, so far I think only PulseAudio makes use of
it.


-- 
Luiz Augusto von Dentz


* Re: [PATCH 0/2] Support for reserving bandwidth on L2CAP socket
  2012-07-30 16:11         ` Luiz Augusto von Dentz
@ 2012-07-31 11:30           ` Manoj Sharma
  2012-07-31 14:10             ` Luiz Augusto von Dentz
  0 siblings, 1 reply; 13+ messages in thread
From: Manoj Sharma @ 2012-07-31 11:30 UTC (permalink / raw)
  To: Luiz Augusto von Dentz; +Cc: Marcel Holtmann, linux-bluetooth, Anurag Gupta

Hi Luiz,

On 7/30/12, Luiz Augusto von Dentz <luiz.dentz@gmail.com> wrote:
> Hi Manoj,
>
> On Mon, Jul 30, 2012 at 9:30 AM, Manoj Sharma <ursmanoj@gmail.com> wrote:
>> One problem which I have faced using SO_PRIORITY is explained below.
>>
>> Suppose we have 2 links A & B and link A has higher priority than link
>> B. And outgoing data transfer is active on both links. Now if device
>> on link A goes far, there would be lot of failures and number of
>> re-transmissions would increase for link A. Consequently at any time
>> host would have significant number of packets for link A, getting
>> accumulated due to poor quality of link.But since link A packets have
>> higher priority, link B packets would suffer infinitely as long as
>> link A packet queue in host is non-empty. Thus link B protocols may
>> fail due to timers expiring and finally disconnection at upper layers.
>
> There is a mechanism to avoid starvation, also apparently you didn't
> study the code since the priority is per L2CAP channel not per link so
> we are able to prioritize per profile.
>
I will check how starvation is avoided. But for your information, I
did observe starvation in practice. And I know that the priority is
per L2CAP channel; I mentioned links on the assumption that AVDTP and
OBEX are connected to different devices, so in that case the channel
priority effectively becomes a connection priority ;).

>> Second problem:
>> We have two links similar to above scenario. Say link A is being used
>> by AVDTP and link B is being used by OBEX. Host can come across a
>> situation where all controller buffers are used by OBEX and AVDTP is
>> waiting for a free buffer. Now due to some reason (e.g. distance) OBEX
>> link B goes weak. This results into delay in transmission of OBEX
>> packets already held by controller and consequently AVDTP packets also
>> get delayed which causes glitches in music streaming and user
>> experience goes bad.
>
> That is exactly what SO_PRIORITY has fixed, by setting SO_PRIORITY you
> prioritize AVDTP stream over OBEX which means AVDTP can use a bigger
> part of the bandwidth while OBEX uses the remaining.
>
I disagree; please consider the situation I explained once more.
There can be a moment when the host has only OBEX packets and no
AVDTP; then, irrespective of which channel has what priority, OBEX
may consume all ACL credits. If at that moment the OBEX link goes
weak (e.g. due to distance), transmission of all the OBEX packets
held by the controller is delayed. In the meantime, AVDTP packets
reach BlueZ, but since there are no credits left, the host has to
delay the AVDTP transmission until an OBEX packet is transmitted and
a Number Of Completed Packets (NOCP) event is received. This
definitely causes a glitch in AVDTP streaming and the end-user
experience goes bad. By reserving credits for the AVDTP channel, we
ensure that OBEX packets don't eat up all the credits while AVDTP
packets are absent.

> The credit based algorithmic actually complicates more than solves the
> problems here because it should actually fail if there is no enough
> bandwidth as requested, so we would actually need to query how much
> credits are available, also any type of bandwidth reservation might be
> overkill with things like variable bit rate where you actually need to
> know what is maximum possible bandwidth you need to reserve before
> hand and that credits cannot be reserved by anyone else.
>
I agree, but we can provide a mechanism that allows only one channel
to reserve bandwidth; in most cases it would be the AVDTP streaming
channel. Reserving at least one credit prevents cases where a
non-AVDTP channel eats all the credits because no AVDTP packets
happen to be queued. Please keep in mind that since OBEX packets
reach BlueZ much faster than AVDTP, such a situation can arise very
easily.

>> These are the basic problems which I have faced and hence felt
>> necessity of a similar but different mechanism and came up with this
>> solution. This solution fixes both of the problems explained above.
>> Based on the explanation above your suggestion is required further.
>
> Could you please show us what system did you find this problem? We
> could possible help you trying to figure out what is going wrong,
> please note that SO_PRIORITY support was introduced in 3.0 and some
> system don't actually use it, in fact so far I think only PulseAudio
> make use of it.
>
Yes, but we forced BlueZ AVDTP to use SO_PRIORITY on our system and
still faced the starvation problem explained above. That said, I am
going to study the priority patch again.
>
> --
> Luiz Augusto von Dentz
>
Thanks & best regards,
Manoj


* Re: [PATCH 0/2] Support for reserving bandwidth on L2CAP socket
  2012-07-31 11:30           ` Manoj Sharma
@ 2012-07-31 14:10             ` Luiz Augusto von Dentz
  2012-07-31 16:58               ` Mat Martineau
  0 siblings, 1 reply; 13+ messages in thread
From: Luiz Augusto von Dentz @ 2012-07-31 14:10 UTC (permalink / raw)
  To: Manoj Sharma; +Cc: Marcel Holtmann, linux-bluetooth, Anurag Gupta

Hi Manoj,

On Tue, Jul 31, 2012 at 2:30 PM, Manoj Sharma <ursmanoj@gmail.com> wrote:
> Hi Luiz,
>
> On 7/30/12, Luiz Augusto von Dentz <luiz.dentz@gmail.com> wrote:
>> Hi Manoj,
>>
>> On Mon, Jul 30, 2012 at 9:30 AM, Manoj Sharma <ursmanoj@gmail.com> wrote:
>>> One problem which I have faced using SO_PRIORITY is explained below.
>>>
>>> Suppose we have 2 links A & B and link A has higher priority than link
>>> B. And outgoing data transfer is active on both links. Now if device
>>> on link A goes far, there would be lot of failures and number of
>>> re-transmissions would increase for link A. Consequently at any time
>>> host would have significant number of packets for link A, getting
>>> accumulated due to poor quality of link.But since link A packets have
>>> higher priority, link B packets would suffer infinitely as long as
>>> link A packet queue in host is non-empty. Thus link B protocols may
>>> fail due to timers expiring and finally disconnection at upper layers.
>>
>> There is a mechanism to avoid starvation, also apparently you didn't
>> study the code since the priority is per L2CAP channel not per link so
>> we are able to prioritize per profile.
>>
> I would check how starvation is avoided. But for your information I
> did observe starvation practically. And I know that priority is per
> L2CAP. I mentioned links based on assumption that AVDTP and OBEX are
> connected with different devices. Hence priority would result into
> priority of connections in such case ;).

There is no such thing as prioritizing a connection; the algorithm
always checks every channel of each connection and prioritizes the
channel. Maybe you are confusing this with what some controllers do;
the controller has no idea which L2CAP channels have been configured,
it only knows about the ACL connections.

>>> Second problem:
>>> We have two links similar to above scenario. Say link A is being used
>>> by AVDTP and link B is being used by OBEX. Host can come across a
>>> situation where all controller buffers are used by OBEX and AVDTP is
>>> waiting for a free buffer. Now due to some reason (e.g. distance) OBEX
>>> link B goes weak. This results into delay in transmission of OBEX
>>> packets already held by controller and consequently AVDTP packets also
>>> get delayed which causes glitches in music streaming and user
>>> experience goes bad.
>>
>> That is exactly what SO_PRIORITY has fixed, by setting SO_PRIORITY you
>> prioritize AVDTP stream over OBEX which means AVDTP can use a bigger
>> part of the bandwidth while OBEX uses the remaining.
>>
> I disagree. Please try to understand the situation I explained again.
> There can be a scenario when host has only OBEX packets and no AVDTP,
> here irrespective of which channel has what priority OBEX may consume
> all ACL credits. At the same moment OBEX link goes weak (e.g.due to
> distance), this would delay the transmission of all OBEX packets held
> by controller. In the mean time, AVDTP packets reach Bluez but since
> there are no credits left, host would have to delay transmission of
> AVDTP until a OBEX packet is transferred and an NOCP is received. This
> would definitely cause a glitch on AVDTP streaming and end user
> experience would go bad. By reserving credits for AVDTP channel, we
> ensure that OBEX packets doesnt eat up all credits while AVDTP packets
> were absent.

Without the use of guaranteed channels you cannot really guarantee
anything. Besides, this would throttle the OBEX transfer even when
nothing is streaming on AVDTP, which I don't think is acceptable.
Also, I've never experienced such a problem; you can start streaming
while transferring something and that never produced any artifacts in
the headsets I have. The only problem we have right now is that
paging another device while an AVDTP stream is active may cause some
audio glitches, and even that could be avoided by tuning the paging
parameters while a high-priority channel is active.

Btw, there is some disconnect from the code here: an OBEX packet can
be quite big, but it is not transmitted as-is; it is fragmented into
L2CAP and then HCI frames, and the HCI frames are what gets sent to
the controller. The moment the AVDTP socket starts producing data,
another socket may be using some or all of the controller buffers,
e.g. 8:1021 (eight 1021-byte buffers), which is at most ~8K bytes of
latency to start up the stream; in fact it is pretty common for audio
to have some latency.

>> The credit based algorithmic actually complicates more than solves the
>> problems here because it should actually fail if there is no enough
>> bandwidth as requested, so we would actually need to query how much
>> credits are available, also any type of bandwidth reservation might be
>> overkill with things like variable bit rate where you actually need to
>> know what is maximum possible bandwidth you need to reserve before
>> hand and that credits cannot be reserved by anyone else.
>>
> I agree, but we can provide a mechanism to allow only one channel to
> reserve bandwidth. In most cases it would be AVDTP streaming channel.
> Reserving at least one credit would allow preventing cases where
> non-AVDTP channel eats all credits due to unavailability of AVDTP
> packets. Please mind that since OBEX packets would be reaching bluez
> much faster than AVDTP, such situation may arise very easily.

That raises my suspicion that you are not really testing against
PulseAudio and obexd. PA should be sending packets much faster than
obexd since its I/O threads are realtime, so it will most likely have
higher priority. Also the latency of OBEX packets is much greater, as
each packet is normally 32k-64k, compared to an AVDTP stream which
sends each packet individually (~700 bytes depending on the MTU).

>>> These are the basic problems which I have faced and hence felt
>>> necessity of a similar but different mechanism and came up with this
>>> solution. This solution fixes both of the problems explained above.
>>> Based on the explanation above your suggestion is required further.
>>
>> Could you please show us what system did you find this problem? We
>> could possible help you trying to figure out what is going wrong,
>> please note that SO_PRIORITY support was introduced in 3.0 and some
>> system don't actually use it, in fact so far I think only PulseAudio
>> make use of it.
>>
> Yes, but we forced Bluez AVDTP to use SO_PRIORITY on our system and
> faced the starvation problem explained above. Though I am going to
> study the priority patch again.

I'm afraid the problem is not SO_PRIORITY but that your audio
subsystem cannot keep the socket buffer non-empty, which is what
would prevent OBEX from taking too much bandwidth. But again, that is
pretty strange, as you should be writing to the AVDTP socket much
more frequently to keep the audio latency constant.

-- 
Luiz Augusto von Dentz


* Re: [PATCH 0/2] Support for reserving bandwidth on L2CAP socket
  2012-07-31 14:10             ` Luiz Augusto von Dentz
@ 2012-07-31 16:58               ` Mat Martineau
  2012-08-01 17:26                 ` Luiz Augusto von Dentz
  0 siblings, 1 reply; 13+ messages in thread
From: Mat Martineau @ 2012-07-31 16:58 UTC (permalink / raw)
  To: Manoj Sharma, Luiz Augusto von Dentz
  Cc: Marcel Holtmann, linux-bluetooth, Anurag Gupta


Manoj and Luiz -

On Tue, 31 Jul 2012, Luiz Augusto von Dentz wrote:

> Hi Manoj,
>
> On Tue, Jul 31, 2012 at 2:30 PM, Manoj Sharma <ursmanoj@gmail.com> wrote:
>> Hi Luiz,
>>
>> On 7/30/12, Luiz Augusto von Dentz <luiz.dentz@gmail.com> wrote:
>>> Hi Manoj,
>>>
>>> On Mon, Jul 30, 2012 at 9:30 AM, Manoj Sharma <ursmanoj@gmail.com> wrote:
>>>> One problem which I have faced using SO_PRIORITY is explained below.
>>>>
>>>> Suppose we have 2 links A & B and link A has higher priority than link
>>>> B. And outgoing data transfer is active on both links. Now if device
>>>> on link A goes far, there would be lot of failures and number of
>>>> re-transmissions would increase for link A. Consequently at any time
>>>> host would have significant number of packets for link A, getting
>>>> accumulated due to poor quality of link.But since link A packets have
>>>> higher priority, link B packets would suffer infinitely as long as
>>>> link A packet queue in host is non-empty. Thus link B protocols may
>>>> fail due to timers expiring and finally disconnection at upper layers.
>>>
>>> There is a mechanism to avoid starvation, also apparently you didn't
>>> study the code since the priority is per L2CAP channel not per link so
>>> we are able to prioritize per profile.
>>>
>> I would check how starvation is avoided. But for your information I
>> did observe starvation practically. And I know that priority is per
>> L2CAP. I mentioned links based on assumption that AVDTP and OBEX are
>> connected with different devices. Hence priority would result into
>> priority of connections in such case ;).
>
> There is no such thing of prioritize a connection, the algorithm used
> always check every channel of each connection and prioritize the
> channel. Maybe you are confusing what some controllers do, the
> controller has no idea what L2CAP channel has been configured it only
> knows about the ACL connections.

The current starvation avoidance algorithm works well as long as there 
is data queued for all of the L2CAP channels and the controller is 
sending packets regularly.  It does seem to break down when the 
controller queue gets stuck (see below).

>>>> Second problem:
>>>> We have two links similar to above scenario. Say link A is being used
>>>> by AVDTP and link B is being used by OBEX. Host can come across a
>>>> situation where all controller buffers are used by OBEX and AVDTP is
>>>> waiting for a free buffer. Now due to some reason (e.g. distance) OBEX
>>>> link B goes weak. This results into delay in transmission of OBEX
>>>> packets already held by controller and consequently AVDTP packets also
>>>> get delayed which causes glitches in music streaming and user
>>>> experience goes bad.
>>>
>>> That is exactly what SO_PRIORITY has fixed, by setting SO_PRIORITY you
>>> prioritize AVDTP stream over OBEX which means AVDTP can use a bigger
>>> part of the bandwidth while OBEX uses the remaining.
>>>
>> I disagree. Please try to understand the situation I explained again.
>> There can be a scenario when host has only OBEX packets and no AVDTP,
>> here irrespective of which channel has what priority OBEX may consume
>> all ACL credits. At the same moment OBEX link goes weak (e.g.due to
>> distance), this would delay the transmission of all OBEX packets held
>> by controller. In the mean time, AVDTP packets reach Bluez but since
>> there are no credits left, host would have to delay transmission of
>> AVDTP until a OBEX packet is transferred and an NOCP is received. This
>> would definitely cause a glitch on AVDTP streaming and end user
>> experience would go bad. By reserving credits for AVDTP channel, we
>> ensure that OBEX packets doesnt eat up all credits while AVDTP packets
>> were absent.
>
> Without the use of guaranteed channels you cannot really guarantee
> anything, besides this would throttle OBEX transfer even when there is
> nothing streaming on AVDTP which I don't thing is acceptable. Also Ive
> never experience such a problem, you can start streaming while
> transferring something and that never produced any artifacts in the
> headsets I have, the only problem we have right now is paging another
> device while AVDTP stream is active may cause some audio glitches and
> even that could be avoided by tuning paging parameters while there is
> a high priority channel active.
>
> Btw, there is some lack of connection to the code, an OBEX packet
> could be quite big but that is not transmitted as it is, it is
> actually fragmented into L2CAP and then HCI frames, the HCI frames is
> the one being sent to the controller, the moment the AVDTP socket
> start producing another socket may be using some/all of the controller
> buffers e.g. 8:1021 that is at most 8K bytes of latency to startup the
> stream, in fact it is pretty common to audio to have some latency.

It's true that the amount of latency due to buffering in the 
controller is minimal, and would not in itself cause streaming issues 
*if* the controller is able to consume packets from the buffer.

I do see where a bad situation arises when the OBEX connection is 
stalled and only queued OBEX data is available to the host stack HCI 
scheduler at that instant.  In that case, the controller queue could 
be completely consumed by data for the stalled channel no matter what 
the priorities are.  This could even happen when audio data is passed 
to the socket at exactly the right time.

If you're using OBEX-over-L2CAP, this could be partially addressed by 
setting a flush timeout.  However, it would still be possible to fill 
the buffers with OBEX packets because non-flushable ERTM s-frames 
would accumulate in the controller buffer.
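
As a rough sketch, marking the channel's outgoing data flushable is
the socket-level half of that, using the existing BT_FLUSHABLE
option; the automatic flush timeout itself is a separate per-link HCI
setting, and BT_FLUSHABLE / BT_FLUSHABLE_ON may need local
definitions if the installed headers lack them:

  #include <stdint.h>
  #include <sys/socket.h>
  #include <bluetooth/bluetooth.h>

  /* Let the controller flush this channel's stale ACL packets instead
   * of letting them pin the shared buffers behind a weak link.
   */
  static int make_channel_flushable(int sk)
  {
          uint32_t flushable = BT_FLUSHABLE_ON;

          return setsockopt(sk, SOL_BLUETOOTH, BT_FLUSHABLE,
                            &flushable, sizeof(flushable));
  }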

For traditional OBEX-over-RFCOMM, OBEX packet sizes that are 
smaller than the controller buffer could help.  This is a tradeoff 
against throughput.  It could work to send smaller OBEX packets when 
an AVDTP stream is active, even if a larger OBEX MTU was negotiated.

It would be a big help if Manoj could post kernel logs showing us how 
the HCI scheduler is actually behaving in the problem case.


While I'm convinced that a problem exists here, I think it can be 
addressed using existing interfaces instead of adding a new one.  For 
example, it may be reasonable to not fully utilize the controller 
buffer with data for just one ACL, or to use priority when looking at 
controller buffer utilization.  Maybe an ACL could use all but one 
slot in the controller buffer, maybe it could only use half if there 
are multiple ACLs open.  I don't think it would throttle throughput 
unless the system was so heavily loaded that it couldn't respond to 
number-of-completed-packets in time at BR/EDR data rates, and in that 
case there are bigger problems.  It's pretty easy to test with 
hdev->acl_pkts set to different values to see if using less of the 
buffer impacts throughput.
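
As a throwaway experiment (not a proposed change), the cap can be 
forced where the controller's buffer count is read, e.g. in 
hci_cc_read_buffer_size() in hci_event.c; the value 4 below is 
arbitrary:

  /* Pretend the controller has fewer ACL buffers than it reports. */
  if (hdev->acl_pkts > 4)
          hdev->acl_pkts = 4;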

Right now, one stalled ACL disrupts all data flow through the 
controller.  And that seems like a problem worth solving.


>>> The credit based algorithmic actually complicates more than solves the
>>> problems here because it should actually fail if there is no enough
>>> bandwidth as requested, so we would actually need to query how much
>>> credits are available, also any type of bandwidth reservation might be
>>> overkill with things like variable bit rate where you actually need to
>>> know what is maximum possible bandwidth you need to reserve before
>>> hand and that credits cannot be reserved by anyone else.
>>>
>> I agree, but we can provide a mechanism to allow only one channel to
>> reserve bandwidth. In most cases it would be AVDTP streaming channel.
>> Reserving at least one credit would allow preventing cases where
>> non-AVDTP channel eats all credits due to unavailability of AVDTP
>> packets. Please mind that since OBEX packets would be reaching bluez
>> much faster than AVDTP, such situation may arise very easily.
>
> That raises my suspicious that you are not really testing against
> PulseAudio and obexd, PA should be sending packets much faster than
> obexd since its IO threads are realtime so it will most likely have
> higher priority. Also the latency of OBEX packets are much greater as
> each packet is normally 32k-64k compared to AVDTP stream which send
> each packet individually (~700 bytes depending on the MTU).
>
>>>> These are the basic problems which I have faced and hence felt
>>>> necessity of a similar but different mechanism and came up with this
>>>> solution. This solution fixes both of the problems explained above.
>>>> Based on the explanation above your suggestion is required further.
>>>
>>> Could you please show us what system did you find this problem? We
>>> could possible help you trying to figure out what is going wrong,
>>> please note that SO_PRIORITY support was introduced in 3.0 and some
>>> system don't actually use it, in fact so far I think only PulseAudio
>>> make use of it.
>>>
>> Yes, but we forced Bluez AVDTP to use SO_PRIORITY on our system and
>> faced the starvation problem explained above. Though I am going to
>> study the priority patch again.
>
> Im afraid the problem is not SO_PRIORITY but your audio subsystem
> cannot keep up the socket buffer non-empty that would avoid OBEX
> taking too much bandwidth, but again that is pretty strange as you
> should be written much more frequently to the AVDTP socket to keep the
> latency of the audio constant.

I agree that SO_PRIORITY is not the problem, but I don't think this 
can be fixed at the audio subsystem level either.

Regards,

--
Mat Martineau
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum



* Re: [PATCH 0/2] Support for reserving bandwidth on L2CAP socket
  2012-07-31 16:58               ` Mat Martineau
@ 2012-08-01 17:26                 ` Luiz Augusto von Dentz
  2012-08-13  6:14                   ` Manoj Sharma
  0 siblings, 1 reply; 13+ messages in thread
From: Luiz Augusto von Dentz @ 2012-08-01 17:26 UTC (permalink / raw)
  To: Mat Martineau
  Cc: Manoj Sharma, Marcel Holtmann, linux-bluetooth, Anurag Gupta

Hi Mat,

On Tue, Jul 31, 2012 at 7:58 PM, Mat Martineau <mathewm@codeaurora.org> wrote:
>
> Manoj and Luiz -
>
>
> On Tue, 31 Jul 2012, Luiz Augusto von Dentz wrote:
>
>> Hi Manoj,
>>
>> On Tue, Jul 31, 2012 at 2:30 PM, Manoj Sharma <ursmanoj@gmail.com> wrote:
>>>
>>> Hi Luiz,
>>>
>>> On 7/30/12, Luiz Augusto von Dentz <luiz.dentz@gmail.com> wrote:
>>>>
>>>> Hi Manoj,
>>>>
>>>> On Mon, Jul 30, 2012 at 9:30 AM, Manoj Sharma <ursmanoj@gmail.com>
>>>> wrote:
>>>>>
>>>>> One problem which I have faced using SO_PRIORITY is explained below.
>>>>>
>>>>> Suppose we have 2 links A & B and link A has higher priority than link
>>>>> B. And outgoing data transfer is active on both links. Now if device
>>>>> on link A goes far, there would be lot of failures and number of
>>>>> re-transmissions would increase for link A. Consequently at any time
>>>>> host would have significant number of packets for link A, getting
>>>>> accumulated due to poor quality of link.But since link A packets have
>>>>> higher priority, link B packets would suffer infinitely as long as
>>>>> link A packet queue in host is non-empty. Thus link B protocols may
>>>>> fail due to timers expiring and finally disconnection at upper layers.
>>>>
>>>>
>>>> There is a mechanism to avoid starvation, also apparently you didn't
>>>> study the code since the priority is per L2CAP channel not per link so
>>>> we are able to prioritize per profile.
>>>>
>>> I would check how starvation is avoided. But for your information I
>>> did observe starvation practically. And I know that priority is per
>>> L2CAP. I mentioned links based on assumption that AVDTP and OBEX are
>>> connected with different devices. Hence priority would result into
>>> priority of connections in such case ;).
>>
>>
>> There is no such thing of prioritize a connection, the algorithm used
>> always check every channel of each connection and prioritize the
>> channel. Maybe you are confusing what some controllers do, the
>> controller has no idea what L2CAP channel has been configured it only
>> knows about the ACL connections.
>
>
> The current starvation avoidance algorithm works well as long as there is
> data queued for all of the L2CAP channels and the controller is sending
> packets regularly.  It does seem to break down when the controller queue
> gets stuck (see below).

Indeed, but that is not the host stack's fault; in fact the
controller queue is less likely to get stuck with AVDTP than with
OBEX, since the latter tends to push data as fast as it can. Also,
for AVDTP it should be possible to set a sensible flush timeout.
Maybe we could make use of SO_SNDTIMEO to actually set the flush
timeout?

> I do see where a bad situation arises when the OBEX connection is stalled
> and only queued OBEX data is available to the host stack HCI scheduler at
> that instant.  In that case, the controller queue could be completely
> consumed by data for the stalled channel no matter what the priorities are.
> This could even happen when audio data is passed to the socket at exactly
> the right time.
>
> If you're using OBEX-over-L2CAP, this could be partially addressed by
> setting a flush timeout.  However, it would still be possible to fill the
> buffers with OBEX packets because non-flushable ERTM s-frames would
> accumulate in the controller buffer.
>
> For traditional OBEX-over-RFCOMM, an OBEX packet sizes that are smaller than
> the controller buffer could help.  This is a tradeoff against throughput.
> It could work to send smaller OBEX packets when an AVDTP stream is active,
> even if a larger OBEX MTU was negotiated.

Yep, in certain situations, such as link loss, this may actually be
unavoidable, although we might be able to use a very small link
supervision timeout.

> It would be a big help if Manoj could post kernel logs showing us how the
> HCI scheduler is actually behaving in the problem case.

Good point.

>
> While I'm convinced that a problem exists here, I think it can be addressed
> using existing interfaces instead of adding a new one.  For example, it may
> be reasonable to not fully utilize the controller buffer with data for just
> one ACL, or to use priority when looking at controller buffer utilization.
> Maybe an ACL could use all but one slot in the controller buffer, maybe it
> could only use half if there are multiple ACLs open.  I don't think it would
> throttle throughput unless the system was so heavily loaded that it couldn't
> respond to number-of-completed-packets in time at BR/EDR data rates, and in
> that case there are bigger problems.  It's pretty easy to test with
> hdev->acl_pkts set to different values to see if using less of the buffer
> impacts throughput.

Well, the problem may exist, but as I said a credit-based algorithm
would probably be overkill. On the other hand, it may be useful to
leave at least one slot for each connection whenever possible, but
IIRC that does impact throughput, especially on controllers that have
big buffers.

> Right now, one stalled ACL disrupts all data flow through the controller.
> And that seems like a problem worth solving.

I would go with the link supervision timeout. Especially in the case
of audio, if the ACL has been stalled for more than 2 seconds there
is probably a real problem, and in most cases it is better to
disconnect and route the audio back to the speakers.

>
> I agree that SO_PRIORITY is not the problem, but I don't think this can be
> fixed at the audio subsystem level either.
>

I'm not absolutely sure, but it has been quite some time since I last
experienced such problems, and I use A2DP/AVDTP a lot. In fact, last
week I set up both sink and source simultaneously (phone -> pc ->
headset) together with a file transfer and didn't notice any
problems.

-- 
Luiz Augusto von Dentz


* Re: [PATCH 0/2] Support for reserving bandwidth on L2CAP socket
  2012-08-01 17:26                 ` Luiz Augusto von Dentz
@ 2012-08-13  6:14                   ` Manoj Sharma
  0 siblings, 0 replies; 13+ messages in thread
From: Manoj Sharma @ 2012-08-13  6:14 UTC (permalink / raw)
  To: Luiz Augusto von Dentz
  Cc: Mat Martineau, Marcel Holtmann, linux-bluetooth, Anurag Gupta

Hi Luiz & Mat,
Sorry for responding late.

On 8/1/12, Luiz Augusto von Dentz <luiz.dentz@gmail.com> wrote:
> Hi Mat,
>
> On Tue, Jul 31, 2012 at 7:58 PM, Mat Martineau <mathewm@codeaurora.org>
> wrote:
>>
>> Manoj and Luiz -
>>
>>
>> On Tue, 31 Jul 2012, Luiz Augusto von Dentz wrote:
>>
>>> Hi Manoj,
>>>
>>> On Tue, Jul 31, 2012 at 2:30 PM, Manoj Sharma <ursmanoj@gmail.com>
>>> wrote:
>>>>
>>>> Hi Luiz,
>>>>
>>>> On 7/30/12, Luiz Augusto von Dentz <luiz.dentz@gmail.com> wrote:
>>>>>
>>>>> Hi Manoj,
>>>>>
>>>>> On Mon, Jul 30, 2012 at 9:30 AM, Manoj Sharma <ursmanoj@gmail.com>
>>>>> wrote:
>>>>>>
>>>>>> One problem which I have faced using SO_PRIORITY is explained below.
>>>>>>
>>>>>> Suppose we have 2 links A & B and link A has higher priority than
>>>>>> link
>>>>>> B. And outgoing data transfer is active on both links. Now if device
>>>>>> on link A goes far, there would be lot of failures and number of
>>>>>> re-transmissions would increase for link A. Consequently at any time
>>>>>> host would have significant number of packets for link A, getting
>>>>>> accumulated due to poor quality of link.But since link A packets have
>>>>>> higher priority, link B packets would suffer infinitely as long as
>>>>>> link A packet queue in host is non-empty. Thus link B protocols may
>>>>>> fail due to timers expiring and finally disconnection at upper
>>>>>> layers.
>>>>>
>>>>>
>>>>> There is a mechanism to avoid starvation, also apparently you didn't
>>>>> study the code since the priority is per L2CAP channel not per link so
>>>>> we are able to prioritize per profile.
>>>>>
>>>> I would check how starvation is avoided. But for your information I
>>>> did observe starvation practically. And I know that priority is per
>>>> L2CAP. I mentioned links based on assumption that AVDTP and OBEX are
>>>> connected with different devices. Hence priority would result into
>>>> priority of connections in such case ;).
>>>
>>>
>>> There is no such thing of prioritize a connection, the algorithm used
>>> always check every channel of each connection and prioritize the
>>> channel. Maybe you are confusing what some controllers do, the
>>> controller has no idea what L2CAP channel has been configured it only
>>> knows about the ACL connections.
>>
>>
>> The current starvation avoidance algorithm works well as long as there is
>> data queued for all of the L2CAP channels and the controller is sending
>> packets regularly.  It does seem to break down when the controller queue
>> gets stuck (see below).
>
> Indeed, but that is not the host stack fault, in fact it less likely
> that controller queue gonna get stuck with AVDTP than with OBEX since
> the latter tend to push data as fast as it can, also for AVDTP it
> should be possible to set a sensible flushable timeout. Maybe we could
> make use of SO_SNDTIMEO to actually set the flushable timeout?
>
>> I do see where a bad situation arises when the OBEX connection is stalled
>> and only queued OBEX data is available to the host stack HCI scheduler at
>> that instant.  In that case, the controller queue could be completely
>> consumed by data for the stalled channel no matter what the priorities
>> are.
>> This could even happen when audio data is passed to the socket at exactly
>> the right time.
>>
>> If you're using OBEX-over-L2CAP, this could be partially addressed by
>> setting a flush timeout.  However, it would still be possible to fill the
>> buffers with OBEX packets because non-flushable ERTM s-frames would
>> accumulate in the controller buffer.
>>
>> For traditional OBEX-over-RFCOMM, an OBEX packet sizes that are smaller
>> than
>> the controller buffer could help.  This is a tradeoff against throughput.
>> It could work to send smaller OBEX packets when an AVDTP stream is
>> active,
>> even if a larger OBEX MTU was negotiated.
>
> Yep, in certain situations this may actually be unavoidable like in
> link loss, although we might be able to have a very small link
> supervision timeout.
>
>> It would be a big help if Manoj could post kernel logs showing us how the
>> HCI scheduler is actually behaving in the problem case.
>
> Good point.
>
>>
>> While I'm convinced that a problem exists here, I think it can be
>> addressed
>> using existing interfaces instead of adding a new one.  For example, it
>> may
>> be reasonable to not fully utilize the controller buffer with data for
>> just
>> one ACL, or to use priority when looking at controller buffer
>> utilization.
>> Maybe an ACL could use all but one slot in the controller buffer, maybe
>> it
>> could only use half if there are multiple ACLs open.  I don't think it
>> would
>> throttle throughput unless the system was so heavily loaded that it
>> couldn't
>> respond to number-of-completed-packets in time at BR/EDR data rates, and
>> in
>> that case there are bigger problems.  It's pretty easy to test with
>> hdev->acl_pkts set to different values to see if using less of the buffer
>> impacts throughput.
>
> Well the problem may exist but as I said credit based algorithm would
> probably be overkill, in the other hand it maybe be useful to leave at
> least one slot for each connection whenever possible, but iirc it does
> impact the throughput specially on controller that does have big
> buffers.
>
>> Right now, one stalled ACL disrupts all data flow through the controller.
>> And that seems like a problem worth solving.
>
> I would go with link supervision timeout, specially in case of audio
> if the ACL got stalled for more than 2 seconds there is probably a
> problem and in most cases it is better to disconnect and route the
> audio back to the speakers.
>
This point is my whole concern. In the situation I faced, a stalled
ACL (OBEX) was disrupting the flow of A2DP traffic; here even a high
priority on the AVDTP channel does not help once the OBEX ACL has
taken all of the data credits.
Therefore we do need to devise a mechanism where one stalled ACL does
not disrupt other ACLs.
In my understanding, if a stalled ACL causes delays on other ACLs
carrying non-AVDTP data, those delays are not immediately visible to
the end user. E.g. in the case of FTP the user may notice that the
transfer is slow, but will likely accept it because two devices are
in use at the same time. But if the stalled ACL affects an ACL
carrying AVDTP data, the delays are immediately visible to the end
user in the form of glitches in music streaming.
Hence, in my opinion, if we simply provide a mechanism that keeps
AVDTP from being impacted by stalled ACLs, we are done. My solution
was an L2CAP socket option that reserves credits for the AVDTP
channel.
Please comment first on whether reserving credits for the AVDTP
channel is a bad solution and whether we can devise a better one for
seamless AVDTP streaming.
Of course we can also find better ways to facilitate the reservation
for AVDTP channels, i.e. other than a socket option. Suggestions, or
even a direct solution, are welcome in this regard too.

>>
>> I agree that SO_PRIORITY is not the problem, but I don't think this can
>> be
>> fixed at the audio subsystem level either.
>>
>
> Im not absolutely sure, but it has been quite some time I don't
> experience such problems and I do use A2DP/AVDTP a lot, in fact last
> week Ive setup both sink and source simultaneously (phone -> pc ->
> headset) together with a file transfer and I didn't notice any
> problems.
>
> --
> Luiz Augusto von Dentz
>
Best regards,
Manoj


Thread overview: 13 messages
2012-07-24  9:54 [PATCH 0/2] Support for reserving bandwidth on L2CAP socket Manoj Kumar Sharma
2012-07-24  9:54 ` [PATCH 1/2] Bluetooth: Usage of HCI channels in L2CAP Manoj Kumar Sharma
2012-07-24  9:55   ` [PATCH 2/2] Bluetooth: L2CAP socket option to reserve bandwidth Manoj Kumar Sharma
2012-07-24 13:53 ` [PATCH 0/2] Support for reserving bandwidth on L2CAP socket Marcel Holtmann
2012-07-27  5:14   ` Manoj Sharma
2012-07-27 14:37     ` Marcel Holtmann
2012-07-30  6:30       ` Manoj Sharma
2012-07-30 16:11         ` Luiz Augusto von Dentz
2012-07-31 11:30           ` Manoj Sharma
2012-07-31 14:10             ` Luiz Augusto von Dentz
2012-07-31 16:58               ` Mat Martineau
2012-08-01 17:26                 ` Luiz Augusto von Dentz
2012-08-13  6:14                   ` Manoj Sharma
