* [PATCH 1/2] net/mlx5: increase async EQ to avoid EQ overrun
From: Max Gurtovoy @ 2018-02-05 14:29 UTC
  To: linux-rdma-u79uwXL29TY76Z2rM5mHXA, jgg-VPRAkNaXOzVWk0Htik3J/w,
	sagi-NQWnxTmZq1alnMjI0IkVqw
  Cc: vladimirk-VPRAkNaXOzVWk0Htik3J/w, Max Gurtovoy

Currently the async EQ has only 256 entries. That might not be big
enough for the SW to handle all the pending events. For example, with
many QPs (say 1024) connected to an SRQ created by an NVMeOF target,
if the target goes down the FW will raise 1024 "last WQE reached"
events, which may cause an EQ overrun. Increase the EQ to a more
reasonable size, beyond which the FW should be able to delay events
and raise them later using an internal backpressure mechanism.

Signed-off-by: Max Gurtovoy <maxg-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
---
 drivers/net/ethernet/mellanox/mlx5/core/eq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
index e7e7cef..9ce4add 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
@@ -51,7 +51,7 @@ enum {
 
 enum {
 	MLX5_NUM_SPARE_EQE	= 0x80,
-	MLX5_NUM_ASYNC_EQE	= 0x100,
+	MLX5_NUM_ASYNC_EQE	= 0x1000,
 	MLX5_NUM_CMD_EQE	= 32,
 	MLX5_NUM_PF_DRAIN	= 64,
 };
-- 
1.8.3.1
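
A quick sketch of the overrun arithmetic described above; the entry
counts are the ones in the patch, while the burst size and the plain
comparison are illustrative, not the FW's actual accounting:

	#include <stdio.h>

	#define OLD_NUM_ASYNC_EQE	0x100	/* 256 entries, before */
	#define NEW_NUM_ASYNC_EQE	0x1000	/* 4096 entries, after */

	int main(void)
	{
		/* one "last WQE reached" event per QP on the SRQ */
		const unsigned int burst = 1024;

		printf("old EQ overruns: %s\n",
		       burst > OLD_NUM_ASYNC_EQE ? "yes" : "no"); /* yes */
		printf("new EQ overruns: %s\n",
		       burst > NEW_NUM_ASYNC_EQE ? "yes" : "no"); /* no */
		return 0;
	}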

* [PATCH 2/2] net/mlx5: fix affinity mask for completion vectors
From: Max Gurtovoy @ 2018-02-05 14:29 UTC
  To: linux-rdma, jgg, sagi; +Cc: vladimirk, Max Gurtovoy, stable, Logan Gunthorpe

Add an offset to ignore private mlx5 vectors.

Fixes: 05e0cc84e00c ("net/mlx5: Fix get vector affinity helper function")
Cc: <stable@vger.kernel.org>
Cc: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 include/linux/mlx5/driver.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
index a061042..9bab9d3 100644
--- a/include/linux/mlx5/driver.h
+++ b/include/linux/mlx5/driver.h
@@ -1229,6 +1229,7 @@ enum {
 	MLX5_TRIGGERED_CMD_COMP = (u64)1 << 32,
 };
 
+/* Returns the affinity mask of a completion vector */
 static inline const struct cpumask *
 mlx5_get_vector_affinity(struct mlx5_core_dev *dev, int vector)
 {
@@ -1238,7 +1239,7 @@ enum {
 	int eqn;
 	int err;
 
-	err = mlx5_vector2eqn(dev, vector, &eqn, &irq);
+	err = mlx5_vector2eqn(dev, MLX5_EQ_VEC_COMP_BASE + vector, &eqn, &irq);
 	if (err)
 		return NULL;
 
-- 
1.8.3.1
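
For context, a minimal sketch of the vector numbering this fix relies
on: the device's first vectors serve private EQs (pages, commands,
async events) and ULPs count completion vectors from 0, so the helper
must add MLX5_EQ_VEC_COMP_BASE before the EQN lookup. The enum values
below are illustrative placeholders, not copied from driver.h:

	#include <stdio.h>

	enum {
		MLX5_EQ_VEC_PAGES = 0,	/* private: page-request EQ */
		MLX5_EQ_VEC_CMD,	/* private: command EQ      */
		MLX5_EQ_VEC_ASYNC,	/* private: async-event EQ  */
		MLX5_EQ_VEC_COMP_BASE,	/* first completion vector  */
	};

	/* ULPs ask for completion vector 0..n-1; the device numbers
	 * its vectors with the private ones first, so translate
	 * before the EQN lookup (as the hunk above now does).
	 */
	static int comp_vector_to_dev_vector(int vector)
	{
		return MLX5_EQ_VEC_COMP_BASE + vector;
	}

	int main(void)
	{
		/* completion vector 0 is device vector 3 here, not 0 */
		printf("%d\n", comp_vector_to_dev_vector(0));
		return 0;
	}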

* Re: [PATCH 2/2] net/mlx5: fix affinity mask for completion vectors
From: Sagi Grimberg @ 2018-02-05 14:36 UTC
  To: Max Gurtovoy, linux-rdma, jgg; +Cc: vladimirk, stable, Logan Gunthorpe

Already sent a patch, you can ignore this one.

* Re: [PATCH 1/2] net/mlx5: increase async EQ to avoid EQ overrun
From: Sagi Grimberg @ 2018-02-05 15:28 UTC
  To: Max Gurtovoy, linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	jgg-VPRAkNaXOzVWk0Htik3J/w
  Cc: vladimirk-VPRAkNaXOzVWk0Htik3J/w

Reviewed-by: Sagi Grimberg <sagi-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>

* Re: [PATCH 1/2] net/mlx5: increase async EQ to avoid EQ overrun
From: Doug Ledford @ 2018-02-05 16:02 UTC
  To: Max Gurtovoy, linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	jgg-VPRAkNaXOzVWk0Htik3J/w, sagi-NQWnxTmZq1alnMjI0IkVqw
  Cc: vladimirk-VPRAkNaXOzVWk0Htik3J/w

On Mon, 2018-02-05 at 16:29 +0200, Max Gurtovoy wrote:
> Currently the async EQ has only 256 entries. That might not be big
> enough for the SW to handle all the pending events. For example, with
> many QPs (say 1024) connected to an SRQ created by an NVMeOF target,
> if the target goes down the FW will raise 1024 "last WQE reached"
> events, which may cause an EQ overrun. Increase the EQ to a more
> reasonable size, beyond which the FW should be able to delay events
> and raise them later using an internal backpressure mechanism.
> 
> Signed-off-by: Max Gurtovoy <maxg-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>

Thanks, applied.  But if this gets me in trouble with DaveM for sending
a patch to drivers/net, I'm gonna tell him I took it because you sent it
here, excluded netdev@, and made it part of a series related to an RDMA
issue, all while giving you the evil eye ;-)

-- 
Doug Ledford <dledford-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
    GPG KeyID: B826A3330E572FDD
    Key fingerprint = AE6B 1BDA 122B 23B4 265B  1274 B826 A333 0E57 2FDD

* Re: [PATCH 1/2] net/mlx5: increase async EQ to avoid EQ overrun
From: Leon Romanovsky @ 2018-02-05 16:10 UTC
  To: Doug Ledford
  Cc: Max Gurtovoy, linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	jgg-VPRAkNaXOzVWk0Htik3J/w, sagi-NQWnxTmZq1alnMjI0IkVqw,
	vladimirk-VPRAkNaXOzVWk0Htik3J/w

On Mon, Feb 05, 2018 at 11:02:10AM -0500, Doug Ledford wrote:
> On Mon, 2018-02-05 at 16:29 +0200, Max Gurtovoy wrote:
> > Currently the async EQ has only 256 entries. That might not be big
> > enough for the SW to handle all the pending events. For example, with
> > many QPs (say 1024) connected to an SRQ created by an NVMeOF target,
> > if the target goes down the FW will raise 1024 "last WQE reached"
> > events, which may cause an EQ overrun. Increase the EQ to a more
> > reasonable size, beyond which the FW should be able to delay events
> > and raise them later using an internal backpressure mechanism.
> >
> > Signed-off-by: Max Gurtovoy <maxg-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
>
> Thanks, applied.  But if this gets me in trouble with DaveM for sending
> a patch to drivers/net, I'm gonna tell him it's because you sent it here
> and excluded netdev@ and that it was part of a series related to an RDMA
> issue that I took it, all while giving you the evil eye ;-)

You can add that neither Saeed nor I tested it for any conflicts.

>
> --
> Doug Ledford <dledford-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
>     GPG KeyID: B826A3330E572FDD
>     Key fingerprint = AE6B 1BDA 122B 23B4 265B  1274 B826 A333 0E57 2FDD

* Re: [PATCH 1/2] net/mlx5: increase async EQ to avoid EQ overrun
From: Jason Gunthorpe @ 2018-02-05 18:09 UTC
  To: Max Gurtovoy
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, sagi-NQWnxTmZq1alnMjI0IkVqw,
	vladimirk-VPRAkNaXOzVWk0Htik3J/w

On Mon, Feb 05, 2018 at 04:29:51PM +0200, Max Gurtovoy wrote:
> Currently the async EQ has only 256 entries. That might not be big
> enough for the SW to handle all the pending events. For example, with
> many QPs (say 1024) connected to an SRQ created by an NVMeOF target,
> if the target goes down the FW will raise 1024 "last WQE reached"
> events, which may cause an EQ overrun. Increase the EQ to a more
> reasonable size, beyond which the FW should be able to delay events
> and raise them later using an internal backpressure mechanism.

If the firmware has an internal backpressure mechanism then why
would we get an EQ overrun?

Do we need to block adding too many QPs to a SRQ as well or something
like that?

Jason

* Re: [PATCH 1/2] net/mlx5: increase async EQ to avoid EQ overrun
From: Max Gurtovoy @ 2018-02-05 23:11 UTC
  To: Jason Gunthorpe
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, sagi-NQWnxTmZq1alnMjI0IkVqw,
	vladimirk-VPRAkNaXOzVWk0Htik3J/w



On 2/5/2018 8:09 PM, Jason Gunthorpe wrote:
> On Mon, Feb 05, 2018 at 04:29:51PM +0200, Max Gurtovoy wrote:
>> Currently the async EQ has only 256 entries. That might not be big
>> enough for the SW to handle all the pending events. For example, with
>> many QPs (say 1024) connected to an SRQ created by an NVMeOF target,
>> if the target goes down the FW will raise 1024 "last WQE reached"
>> events, which may cause an EQ overrun. Increase the EQ to a more
>> reasonable size, beyond which the FW should be able to delay events
>> and raise them later using an internal backpressure mechanism.
> 
> If the firmware has an internal backpressure mechanism then why
> would we get an EQ overrun?

The FW backpressure mechanism is WIP; that's why we get the overrun.
After consulting with the FW team, we concluded that a 256-entry EQ is
too small. Do you think it's reasonable to allocate 4k entries (256KB
of contiguous memory) for the async EQ?
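
For reference, the arithmetic behind that figure, assuming the 64-byte
EQEs that make the quoted number work out:

	#include <stdio.h>

	int main(void)
	{
		const unsigned int entries  = 0x1000; /* 4k entries     */
		const unsigned int eqe_size = 64;     /* bytes, assumed */

		printf("%u KB\n", entries * eqe_size / 1024); /* prints 256 */
		return 0;
	}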

> 
> Do we need to block adding too many QPs to a SRQ as well or something
> like that?

Hard to say. In the storage world, this may lead to a situation where
initiator X has priority over initiator Y without any good reason
(only because X was served before Y).

> 
> Jason
> 

* Re: [PATCH 1/2] net/mlx5: increase async EQ to avoid EQ overrun
From: Jason Gunthorpe @ 2018-02-05 23:16 UTC
  To: Max Gurtovoy
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, sagi-NQWnxTmZq1alnMjI0IkVqw,
	vladimirk-VPRAkNaXOzVWk0Htik3J/w

On Tue, Feb 06, 2018 at 01:11:41AM +0200, Max Gurtovoy wrote:
> 
> 
> On 2/5/2018 8:09 PM, Jason Gunthorpe wrote:
> >On Mon, Feb 05, 2018 at 04:29:51PM +0200, Max Gurtovoy wrote:
> >>Currently the async EQ has only 256 entries. That might not be big
> >>enough for the SW to handle all the pending events. For example, with
> >>many QPs (say 1024) connected to an SRQ created by an NVMeOF target,
> >>if the target goes down the FW will raise 1024 "last WQE reached"
> >>events, which may cause an EQ overrun. Increase the EQ to a more
> >>reasonable size, beyond which the FW should be able to delay events
> >>and raise them later using an internal backpressure mechanism.
> >
> >If the firmware has an internal backpressure mechanism then why
> >would we get an EQ overrun?
> 
> The FW backpressure mechanism is WIP; that's why we get the overrun.

Ah, so current HW blows up if the EQ is overrun, and that can actually
be triggered by ULPs? Yuk.

> After consulting with the FW team, we concluded that a 256-entry EQ
> is too small. Do you think it's reasonable to allocate 4k entries
> (256KB of contiguous memory) for the async EQ?

No idea, ask Saeed?

> >Do we need to block adding too many QPs to a SRQ as well or something
> >like that?
> 
> Hard to say. In the storage world, this may lead to a situation where
> initiator X has priority over initiator Y without any good reason
> (only because X was served before Y).

Well, correctness comes first, so the device does have to protect
itself from rogue ULPs. If that means enforcing a goofy limit, then
so be it :(

Presumably someday fixed firmware will remove the limitation?

Jason

* Re: [PATCH 1/2] net/mlx5: increase async EQ to avoid EQ overrun
From: Saeed Mahameed @ 2018-02-08  5:45 UTC
  To: Jason Gunthorpe
  Cc: Max Gurtovoy, linux-rdma-u79uwXL29TY76Z2rM5mHXA, Sagi Grimberg,
	vladimirk-VPRAkNaXOzVWk0Htik3J/w, Saeed Mahameed

On Mon, Feb 5, 2018 at 3:16 PM, Jason Gunthorpe <jgg-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org> wrote:
> On Tue, Feb 06, 2018 at 01:11:41AM +0200, Max Gurtovoy wrote:
>>
>>
>> On 2/5/2018 8:09 PM, Jason Gunthorpe wrote:
>> >On Mon, Feb 05, 2018 at 04:29:51PM +0200, Max Gurtovoy wrote:
>> >>Currently the async EQ has only 256 entries. That might not be big
>> >>enough for the SW to handle all the pending events. For example, with
>> >>many QPs (say 1024) connected to an SRQ created by an NVMeOF target,
>> >>if the target goes down the FW will raise 1024 "last WQE reached"
>> >>events, which may cause an EQ overrun. Increase the EQ to a more
>> >>reasonable size, beyond which the FW should be able to delay events
>> >>and raise them later using an internal backpressure mechanism.
>> >
>> >If the firmware has an internal backpressure mechanism then why
>> >would we get an EQ overrun?
>>
>> The FW backpressure mechanism is WIP; that's why we get the overrun.
>
> Ah, so current HW blows up if the EQ is overrun, and that can actually
> be triggered by ULPs? Yuk.
>
>> After consulting with the FW team, we concluded that a 256-entry EQ
>> is too small. Do you think it's reasonable to allocate 4k entries
>> (256KB of contiguous memory) for the async EQ?
>
> No idea, ask Saeed?

Thank you Jason for raising those concerns; they are valid, and the
whole issue is already being discussed internally.

Max, you have been cc'ed on my emails regarding this issue since last
week; next time I would expect you to roll back such a patch.

I see that this patch is already on its way to Linus, with no proper
mlx5 maintainer sign-off, nice.

There is a well-defined flow we have internally for each patch to pass
review, regression, and merge tests; why did you go behind our backs
with this patch?

>
>> >Do we need to block adding too many QPs to a SRQ as well or something
>> >like that?
>>
>> Hard to say. In the storage world, this may lead to a situation where
>> initiator X has priority over initiator Y without any good reason
>> (only because X was served before Y).
>
> Well, correctness comes first, so the device does have to protect
> itself from rogue ULPs. If that means enforcing a goofy limit, then
> so be it :(
>
> Presumably someday fixed firmware will remove the limitation?
>
> Jason

* Re: [PATCH 1/2] net/mlx5: increase async EQ to avoid EQ overrun
From: Jason Gunthorpe @ 2018-02-08 16:26 UTC
  To: Saeed Mahameed, Doug Ledford
  Cc: Max Gurtovoy, linux-rdma-u79uwXL29TY76Z2rM5mHXA, Sagi Grimberg,
	vladimirk-VPRAkNaXOzVWk0Htik3J/w, Saeed Mahameed

On Wed, Feb 07, 2018 at 09:45:52PM -0800, Saeed Mahameed wrote:

> > I see that this patch is already on its way to Linus, with no proper
> > mlx5 maintainer sign-off, nice.

Doug and I can be more careful not to accept patches without
maintainer acks, but this means I will want to see acks from you no
matter who sends the patch (including patches sent by Leon).

Jason

* Re: [PATCH 1/2] net/mlx5: increase async EQ to avoid EQ overrun
From: Leon Romanovsky @ 2018-02-08 16:39 UTC
  To: Jason Gunthorpe
  Cc: Saeed Mahameed, Doug Ledford, Max Gurtovoy,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA, Sagi Grimberg,
	vladimirk-VPRAkNaXOzVWk0Htik3J/w, Saeed Mahameed

On Thu, Feb 08, 2018 at 09:26:05AM -0700, Jason Gunthorpe wrote:
> On Wed, Feb 07, 2018 at 09:45:52PM -0800, Saeed Mahameed wrote:
>
> > I see that this patch is already on its way to Linus, with no proper
> > mlx5 maintainer sign-off, nice.
>
> Doug and I can be more careful not to accept patches without
> maintainer acks, but this means I will want to see acks from you no
> matter who sends the patch (including patches sent by Leon).

Please don't reinvent the wheel; mlx5 is my responsibility too,
I just successfully avoid dealing with netdev patches.

 MELLANOX MLX5 core VPI driver
 M:      Saeed Mahameed <saeedm-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
 M:      Matan Barak <matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
 M:      Leon Romanovsky <leonro-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
 L:      netdev-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
 L:      linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
 W:      http://www.mellanox.com
 Q:      http://patchwork.ozlabs.org/project/netdev/list/
 S:      Supported
 F:      drivers/net/ethernet/mellanox/mlx5/core/
 F:      include/linux/mlx5/

Thanks

>
> Jason

* Re: [PATCH 1/2] net/mlx5: increase async EQ to avoid EQ overrun
From: Saeed Mahameed @ 2018-02-08 18:58 UTC
  To: Jason Gunthorpe
  Cc: Doug Ledford, Max Gurtovoy, linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	Sagi Grimberg, vladimirk-VPRAkNaXOzVWk0Htik3J/w, Saeed Mahameed

On Thu, Feb 8, 2018 at 8:26 AM, Jason Gunthorpe <jgg-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org> wrote:
> On Wed, Feb 07, 2018 at 09:45:52PM -0800, Saeed Mahameed wrote:
>
>> I see that this patch is already on its way to Linus, with no proper
>> mlx5 maintainer sign-off, nice.
>
> Doug and I can be more careful not to accept patches without
> maintainer acks, but this means I will want to see acks from you no
> matter who sends the patch (including patches sent by Leon).
>

Thanks for understanding. Leon and I are almost always in sync;
we don't submit patches to each other's tree (especially pure patches)
without notifying the other first.

> Jason
