linux-rdma.vger.kernel.org archive mirror
* [Bug 214523] New: RDMA Mellanox RoCE drivers are unresponsive to ARP updates during a reconnect
@ 2021-09-24 15:34 bugzilla-daemon
  2021-09-26  8:02 ` Leon Romanovsky
  0 siblings, 1 reply; 10+ messages in thread
From: bugzilla-daemon @ 2021-09-24 15:34 UTC (permalink / raw)
  To: linux-rdma

https://bugzilla.kernel.org/show_bug.cgi?id=214523

            Bug ID: 214523
           Summary: RDMA Mellanox RoCE drivers are unresponsive to ARP
                    updates during a reconnect
           Product: Drivers
           Version: 2.5
    Kernel Version: 5.14
          Hardware: All
                OS: Linux
              Tree: Mainline
            Status: NEW
          Severity: normal
          Priority: P1
         Component: Infiniband/RDMA
          Assignee: drivers_infiniband-rdma@kernel-bugs.osdl.org
          Reporter: kolga@netapp.com
        Regression: No

A RoCE RDMA connection uses the CMA protocol to establish an RDMA
connection. During setup the code uses hard-coded timeout/retry values.
These values govern how a Connect Request that receives no answer is
re-tried. During the re-try attempts the ARP updates of the destination
server are ignored. The current timeout values lead to a 4+ minute long
attempt at connecting to a server that no longer owns the IP after the ARP
update happens.

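For reference, the hard-coded values in question are the CMA defaults in
drivers/infiniband/core/cma.c; a sketch of the relevant constants, as they
appear in mainline kernels of this era:

    /* drivers/infiniband/core/cma.c */
    #define CMA_CM_RESPONSE_TIMEOUT 20   /* IBTA encoding: 4.096 us * 2^20 */
    #define CMA_MAX_CM_RETRIES 15

With that encoding each attempt waits roughly 4.3 seconds, and 15 retries
(plus the extra per-attempt handling in the CM layer) is where the
minutes-long connect attempt comes from.
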
The ask is to make the timeout/retry values configurable via procfs or
sysfs. This will allow environments that use RoCE to reduce the timeouts
to more reasonable values and to react to ARP updates faster. Other CMA
users (e.g. IB) can continue to use the existing values.

The problem exists in all kernel versions, but this bugzilla is filed for
the 5.14 kernel.

The use case is (RoCE-based) NFSoRDMA where a server went down and another
server was brought up in its place. The RDMA layer introduces a 4+ minute
delay in re-establishing an RDMA connection and letting IO resume, due to
its inability to react to the ARP update.

-- 
You may reply to this email to add a comment.

You are receiving this mail because:
You are watching the assignee of the bug.


* Re: [Bug 214523] New: RDMA Mellanox RoCE drivers are unresponsive to ARP updates during a reconnect
  2021-09-24 15:34 [Bug 214523] New: RDMA Mellanox RoCE drivers are unresponsive to ARP updates during a reconnect bugzilla-daemon
@ 2021-09-26  8:02 ` Leon Romanovsky
  2021-09-26 17:36   ` Chuck Lever III
  0 siblings, 1 reply; 10+ messages in thread
From: Leon Romanovsky @ 2021-09-26  8:02 UTC (permalink / raw)
  To: bugzilla-daemon; +Cc: linux-rdma, Chuck Lever

On Fri, Sep 24, 2021 at 03:34:32PM +0000, bugzilla-daemon@bugzilla.kernel.org wrote:
> https://bugzilla.kernel.org/show_bug.cgi?id=214523
> 
>             Bug ID: 214523
>            Summary: RDMA Mellanox RoCE drivers are unresponsive to ARP
>                     updates during a reconnect
>            Product: Drivers
>            Version: 2.5
>     Kernel Version: 5.14
>           Hardware: All
>                 OS: Linux
>               Tree: Mainline
>             Status: NEW
>           Severity: normal
>           Priority: P1
>          Component: Infiniband/RDMA
>           Assignee: drivers_infiniband-rdma@kernel-bugs.osdl.org
>           Reporter: kolga@netapp.com
>         Regression: No
> 
> A RoCE RDMA connection uses the CMA protocol to establish an RDMA
> connection. During setup the code uses hard-coded timeout/retry values.
> These values govern how a Connect Request that receives no answer is
> re-tried. During the re-try attempts the ARP updates of the destination
> server are ignored. The current timeout values lead to a 4+ minute long
> attempt at connecting to a server that no longer owns the IP after the
> ARP update happens.
> 
> The ask is to make the timeout/retry values configurable via procfs or
> sysfs. This will allow environments that use RoCE to reduce the timeouts
> to more reasonable values and to react to ARP updates faster. Other CMA
> users (e.g. IB) can continue to use the existing values.
> 
> The problem exists in all kernel versions, but this bugzilla is filed
> for the 5.14 kernel.
> 
> The use case is (RoCE-based) NFSoRDMA where a server went down and
> another server was brought up in its place. The RDMA layer introduces a
> 4+ minute delay in re-establishing an RDMA connection and letting IO
> resume, due to its inability to react to the ARP update.

RDMA-CM has many different timeouts, so I hope that my answer is for the
right timeout.

We probably need to extend rdma_connect() to receive a
remote_cm_response_timeout value, so NFSoRDMA can set it to whatever value
is appropriate.

The timewait will then be calculated based on it in ib_send_cm_req().

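For illustration, the ULP side of such an extension might look like this
(a sketch only: rdma_connect() and struct rdma_conn_param are the existing
kernel API, while the remote_cm_response_timeout field is the proposed,
not-yet-existing extension and the value is an arbitrary example):

    struct rdma_conn_param conn_param;

    memset(&conn_param, 0, sizeof(conn_param));
    conn_param.retry_count = 7;                 /* existing field */
    /* proposed field; 4.096 us * 2^14 is roughly 67 ms per attempt */
    conn_param.remote_cm_response_timeout = 14;
    ret = rdma_connect(cm_id, &conn_param);
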
Thanks

> 
> -- 
> You may reply to this email to add a comment.
> 
> You are receiving this mail because:
> You are watching the assignee of the bug.


* Re: [Bug 214523] New: RDMA Mellanox RoCE drivers are unresponsive to ARP updates during a reconnect
  2021-09-26  8:02 ` Leon Romanovsky
@ 2021-09-26 17:36   ` Chuck Lever III
  2021-09-27 12:09     ` Leon Romanovsky
  0 siblings, 1 reply; 10+ messages in thread
From: Chuck Lever III @ 2021-09-26 17:36 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: bugzilla-daemon, linux-rdma

Hi Leon-

Thanks for the suggestion! More below.

> On Sep 26, 2021, at 4:02 AM, Leon Romanovsky <leon@kernel.org> wrote:
> 
> On Fri, Sep 24, 2021 at 03:34:32PM +0000, bugzilla-daemon@bugzilla.kernel.org wrote:
>> https://bugzilla.kernel.org/show_bug.cgi?id=214523
>> 
>>            Bug ID: 214523
>>           Summary: RDMA Mellanox RoCE drivers are unresponsive to ARP
>>                    updates during a reconnect
>>           Product: Drivers
>>           Version: 2.5
>>    Kernel Version: 5.14
>>          Hardware: All
>>                OS: Linux
>>              Tree: Mainline
>>            Status: NEW
>>          Severity: normal
>>          Priority: P1
>>         Component: Infiniband/RDMA
>>          Assignee: drivers_infiniband-rdma@kernel-bugs.osdl.org
>>          Reporter: kolga@netapp.com
>>        Regression: No
>> 
>> A RoCE RDMA connection uses the CMA protocol to establish an RDMA
>> connection. During setup the code uses hard-coded timeout/retry values.
>> These values govern how a Connect Request that receives no answer is
>> re-tried. During the re-try attempts the ARP updates of the destination
>> server are ignored. The current timeout values lead to a 4+ minute long
>> attempt at connecting to a server that no longer owns the IP after the
>> ARP update happens.
>> 
>> The ask is to make the timeout/retry values configurable via procfs or
>> sysfs. This will allow environments that use RoCE to reduce the timeouts
>> to more reasonable values and to react to ARP updates faster. Other CMA
>> users (e.g. IB) can continue to use the existing values.

I would rather not add a user-facing tunable. The fabric should
be better at detecting addressing changes within a reasonable
time. It would be helpful to provide a history of why the ARP
timeout is so lax -- do certain ULPs rely on it being long?


>> The problem exists in all kernel versions, but this bugzilla is filed
>> for the 5.14 kernel.
>> 
>> The use case is (RoCE-based) NFSoRDMA where a server went down and
>> another server was brought up in its place. The RDMA layer introduces a
>> 4+ minute delay in re-establishing an RDMA connection and letting IO
>> resume, due to its inability to react to the ARP update.
> 
> RDMA-CM has many different timeouts, so I hope that my answer is for the
> right timeout.
> 
> We probably need to extend rdma_connect() to receive a
> remote_cm_response_timeout value, so NFSoRDMA can set it to whatever
> value is appropriate.
> 
> The timewait will then be calculated based on it in ib_send_cm_req().

I hope a mechanism can be found that behaves the same or nearly the
same way for all RDMA fabrics.

For those who are not NFS-savvy:

Simple NFS server failover is typically implemented with a heartbeat
between two similar platforms that both access the same backend
storage. When one platform fails, the other detects it and takes over
the failing platform's IP address. Clients detect connection loss
with the failing platform, and upon reconnection to that IP address
are transparently directed to the other platform.

NFS server vendors have tried to extend this behavior to RDMA fabrics,
with varying degrees of success.

In addition to enforcing availability SLAs, the time it takes to
re-establish a working connection is critical for NFSv4 because each
client maintains a lease to prevent the server from purging open and
lock state. If the reconnect takes too long, the client's lease is
jeopardized because other clients can then access files that client
might still have locked or open.


--
Chuck Lever





* Re: [Bug 214523] New: RDMA Mellanox RoCE drivers are unresponsive to ARP updates during a reconnect
  2021-09-26 17:36   ` Chuck Lever III
@ 2021-09-27 12:09     ` Leon Romanovsky
  2021-09-27 12:24       ` Jason Gunthorpe
  2021-09-27 16:14       ` Chuck Lever III
  0 siblings, 2 replies; 10+ messages in thread
From: Leon Romanovsky @ 2021-09-27 12:09 UTC (permalink / raw)
  To: Chuck Lever III; +Cc: bugzilla-daemon, linux-rdma

On Sun, Sep 26, 2021 at 05:36:01PM +0000, Chuck Lever III wrote:
> Hi Leon-
> 
> Thanks for the suggestion! More below.
> 
> > On Sep 26, 2021, at 4:02 AM, Leon Romanovsky <leon@kernel.org> wrote:
> > 
> > On Fri, Sep 24, 2021 at 03:34:32PM +0000, bugzilla-daemon@bugzilla.kernel.org wrote:
> >> https://bugzilla.kernel.org/show_bug.cgi?id=214523
> >> 
> >>            Bug ID: 214523
> >>           Summary: RDMA Mellanox RoCE drivers are unresponsive to ARP
> >>                    updates during a reconnect
> >>           Product: Drivers
> >>           Version: 2.5
> >>    Kernel Version: 5.14
> >>          Hardware: All
> >>                OS: Linux
> >>              Tree: Mainline
> >>            Status: NEW
> >>          Severity: normal
> >>          Priority: P1
> >>         Component: Infiniband/RDMA
> >>          Assignee: drivers_infiniband-rdma@kernel-bugs.osdl.org
> >>          Reporter: kolga@netapp.com
> >>        Regression: No
> >> 
> >> A RoCE RDMA connection uses the CMA protocol to establish an RDMA
> >> connection. During setup the code uses hard-coded timeout/retry values.
> >> These values govern how a Connect Request that receives no answer is
> >> re-tried. During the re-try attempts the ARP updates of the destination
> >> server are ignored. The current timeout values lead to a 4+ minute long
> >> attempt at connecting to a server that no longer owns the IP after the
> >> ARP update happens.
> >> 
> >> The ask is to make the timeout/retry values configurable via procfs or
> >> sysfs. This will allow environments that use RoCE to reduce the timeouts
> >> to more reasonable values and to react to ARP updates faster. Other CMA
> >> users (e.g. IB) can continue to use the existing values.
> 
> I would rather not add a user-facing tunable. The fabric should
> be better at detecting addressing changes within a reasonable
> time. It would be helpful to provide a history of why the ARP
> timeout is so lax -- do certain ULPs rely on it being long?

I don't know about ULPs and ARPs, but how to calculate TimeWait is
described in the spec.

Regarding the tunable, I agree. Because it needs to be per-connection, most
likely not many people in the world would succeed in configuring it properly.

> 
> 
> >> The problem exists in all kernel versions, but this bugzilla is filed
> >> for the 5.14 kernel.
> >> 
> >> The use case is (RoCE-based) NFSoRDMA where a server went down and
> >> another server was brought up in its place. The RDMA layer introduces a
> >> 4+ minute delay in re-establishing an RDMA connection and letting IO
> >> resume, due to its inability to react to the ARP update.
> > 
> > RDMA-CM has many different timeouts, so I hope that my answer is for the
> > right timeout.
> > 
> > We probably need to extend rdma_connect() to receive a
> > remote_cm_response_timeout value, so NFSoRDMA can set it to whatever
> > value is appropriate.
> > 
> > The timewait will then be calculated based on it in ib_send_cm_req().
> 
> I hope a mechanism can be found that behaves the same or nearly the
> same way for all RDMA fabrics.

It depends on the fabric itself; in every network
remote_cm_response_timeout can be different.

> 
> For those who are not NFS-savvy:
> 
> Simple NFS server failover is typically implemented with a heartbeat
> between two similar platforms that both access the same backend
> storage. When one platform fails, the other detects it and takes over
> the failing platform's IP address. Clients detect connection loss
> with the failing platform, and upon reconnection to that IP address
> are transparently directed to the other platform.
> 
> NFS server vendors have tried to extend this behavior to RDMA fabrics,
> with varying degrees of success.
> 
> In addition to enforcing availability SLAs, the time it takes to
> re-establish a working connection is critical for NFSv4 because each
> client maintains a lease to prevent the server from purging open and
> lock state. If the reconnect takes too long, the client's lease is
> jeopardized because other clients can then access files that client
> might still have locked or open.
> 
> 
> --
> Chuck Lever
> 
> 
> 


* Re: [Bug 214523] New: RDMA Mellanox RoCE drivers are unresponsive to ARP updates during a reconnect
  2021-09-27 12:09     ` Leon Romanovsky
@ 2021-09-27 12:24       ` Jason Gunthorpe
  2021-09-27 12:55         ` Mark Zhang
  2021-09-27 16:14       ` Chuck Lever III
  1 sibling, 1 reply; 10+ messages in thread
From: Jason Gunthorpe @ 2021-09-27 12:24 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: Chuck Lever III, bugzilla-daemon, linux-rdma

On Mon, Sep 27, 2021 at 03:09:44PM +0300, Leon Romanovsky wrote:
> On Sun, Sep 26, 2021 at 05:36:01PM +0000, Chuck Lever III wrote:
> > Hi Leon-
> > 
> > Thanks for the suggestion! More below.
> > 
> > > On Sep 26, 2021, at 4:02 AM, Leon Romanovsky <leon@kernel.org> wrote:
> > > 
> > > On Fri, Sep 24, 2021 at 03:34:32PM +0000, bugzilla-daemon@bugzilla.kernel.org wrote:
> > >> https://bugzilla.kernel.org/show_bug.cgi?id=214523
> > >> 
> > >>            Bug ID: 214523
> > >>           Summary: RDMA Mellanox RoCE drivers are unresponsive to ARP
> > >>                    updates during a reconnect
> > >>           Product: Drivers
> > >>           Version: 2.5
> > >>    Kernel Version: 5.14
> > >>          Hardware: All
> > >>                OS: Linux
> > >>              Tree: Mainline
> > >>            Status: NEW
> > >>          Severity: normal
> > >>          Priority: P1
> > >>         Component: Infiniband/RDMA
> > >>          Assignee: drivers_infiniband-rdma@kernel-bugs.osdl.org
> > >>          Reporter: kolga@netapp.com
> > >>        Regression: No
> > >> 
> > >> A RoCE RDMA connection uses the CMA protocol to establish an RDMA
> > >> connection. During setup the code uses hard-coded timeout/retry values.
> > >> These values govern how a Connect Request that receives no answer is
> > >> re-tried. During the re-try attempts the ARP updates of the destination
> > >> server are ignored. The current timeout values lead to a 4+ minute long
> > >> attempt at connecting to a server that no longer owns the IP after the
> > >> ARP update happens.
> > >> 
> > >> The ask is to make the timeout/retry values configurable via procfs or
> > >> sysfs. This will allow environments that use RoCE to reduce the timeouts
> > >> to more reasonable values and to react to ARP updates faster. Other CMA
> > >> users (e.g. IB) can continue to use the existing values.
> > 
> > I would rather not add a user-facing tunable. The fabric should
> > be better at detecting addressing changes within a reasonable
> > time. It would be helpful to provide a history of why the ARP
> > timeout is so lax -- do certain ULPs rely on it being long?
> 
> I don't know about ULPs and ARPs, but how to calculate TimeWait is
> described in the spec.
> 
> Regarding the tunable, I agree. Because it needs to be per-connection, most
> likely not many people in the world would succeed in configuring it properly.

Maybe we should be disconnecting the cm_id if a gratuitous ARP changes
the MAC address? The cm_id is surely broken after that event, right?

Jason


* Re: [Bug 214523] New: RDMA Mellanox RoCE drivers are unresponsive to ARP updates during a reconnect
  2021-09-27 12:24       ` Jason Gunthorpe
@ 2021-09-27 12:55         ` Mark Zhang
  2021-09-27 13:10           ` Jason Gunthorpe
  0 siblings, 1 reply; 10+ messages in thread
From: Mark Zhang @ 2021-09-27 12:55 UTC (permalink / raw)
  To: Jason Gunthorpe, Leon Romanovsky
  Cc: Chuck Lever III, bugzilla-daemon, linux-rdma

On 9/27/2021 8:24 PM, Jason Gunthorpe wrote:
> On Mon, Sep 27, 2021 at 03:09:44PM +0300, Leon Romanovsky wrote:
>> On Sun, Sep 26, 2021 at 05:36:01PM +0000, Chuck Lever III wrote:
>>> Hi Leon-
>>>
>>> Thanks for the suggestion! More below.
>>>
>>>> On Sep 26, 2021, at 4:02 AM, Leon Romanovsky <leon@kernel.org> wrote:
>>>>
>>>> On Fri, Sep 24, 2021 at 03:34:32PM +0000, bugzilla-daemon@bugzilla.kernel.org wrote:
>>>>> https://bugzilla.kernel.org/show_bug.cgi?id=214523
>>>>>
>>>>>             Bug ID: 214523
>>>>>            Summary: RDMA Mellanox RoCE drivers are unresponsive to ARP
>>>>>                     updates during a reconnect
>>>>>            Product: Drivers
>>>>>            Version: 2.5
>>>>>     Kernel Version: 5.14
>>>>>           Hardware: All
>>>>>                 OS: Linux
>>>>>               Tree: Mainline
>>>>>             Status: NEW
>>>>>           Severity: normal
>>>>>           Priority: P1
>>>>>          Component: Infiniband/RDMA
>>>>>           Assignee: drivers_infiniband-rdma@kernel-bugs.osdl.org
>>>>>           Reporter: kolga@netapp.com
>>>>>         Regression: No
>>>>>
>>>>> A RoCE RDMA connection uses the CMA protocol to establish an RDMA
>>>>> connection. During setup the code uses hard-coded timeout/retry values.
>>>>> These values govern how a Connect Request that receives no answer is
>>>>> re-tried. During the re-try attempts the ARP updates of the destination
>>>>> server are ignored. The current timeout values lead to a 4+ minute long
>>>>> attempt at connecting to a server that no longer owns the IP after the
>>>>> ARP update happens.
>>>>>
>>>>> The ask is to make the timeout/retry values configurable via procfs or
>>>>> sysfs. This will allow environments that use RoCE to reduce the timeouts
>>>>> to more reasonable values and to react to ARP updates faster. Other CMA
>>>>> users (e.g. IB) can continue to use the existing values.
>>>
>>> I would rather not add a user-facing tunable. The fabric should
>>> be better at detecting addressing changes within a reasonable
>>> time. It would be helpful to provide a history of why the ARP
>>> timeout is so lax -- do certain ULPs rely on it being long?
>>
>> I don't know about ULPs and ARPs, but how to calculate TimeWait is
>> described in the spec.
>>
>> Regarding the tunable, I agree. Because it needs to be per-connection, most
>> likely not many people in the world would succeed in configuring it properly.
> 
> Maybe we should be disconnecting the cm_id if a gratuitous ARP changes
> the MAC address? The cm_id is surely broken after that event, right?

Is there an event on gratuitous ARP? And we also need to notify the
user-space application, right?


* Re: [Bug 214523] New: RDMA Mellanox RoCE drivers are unresponsive to ARP updates during a reconnect
  2021-09-27 12:55         ` Mark Zhang
@ 2021-09-27 13:10           ` Jason Gunthorpe
  2021-09-27 13:32             ` Haakon Bugge
  0 siblings, 1 reply; 10+ messages in thread
From: Jason Gunthorpe @ 2021-09-27 13:10 UTC (permalink / raw)
  To: Mark Zhang; +Cc: Leon Romanovsky, Chuck Lever III, bugzilla-daemon, linux-rdma

On Mon, Sep 27, 2021 at 08:55:19PM +0800, Mark Zhang wrote:
> On 9/27/2021 8:24 PM, Jason Gunthorpe wrote:
> > On Mon, Sep 27, 2021 at 03:09:44PM +0300, Leon Romanovsky wrote:
> > > On Sun, Sep 26, 2021 at 05:36:01PM +0000, Chuck Lever III wrote:
> > > > Hi Leon-
> > > > 
> > > > Thanks for the suggestion! More below.
> > > > 
> > > > > On Sep 26, 2021, at 4:02 AM, Leon Romanovsky <leon@kernel.org> wrote:
> > > > > 
> > > > > On Fri, Sep 24, 2021 at 03:34:32PM +0000, bugzilla-daemon@bugzilla.kernel.org wrote:
> > > > > > https://bugzilla.kernel.org/show_bug.cgi?id=214523
> > > > > > 
> > > > > >             Bug ID: 214523
> > > > > >            Summary: RDMA Mellanox RoCE drivers are unresponsive to ARP
> > > > > >                     updates during a reconnect
> > > > > >            Product: Drivers
> > > > > >            Version: 2.5
> > > > > >     Kernel Version: 5.14
> > > > > >           Hardware: All
> > > > > >                 OS: Linux
> > > > > >               Tree: Mainline
> > > > > >             Status: NEW
> > > > > >           Severity: normal
> > > > > >           Priority: P1
> > > > > >          Component: Infiniband/RDMA
> > > > > >           Assignee: drivers_infiniband-rdma@kernel-bugs.osdl.org
> > > > > >           Reporter: kolga@netapp.com
> > > > > >         Regression: No
> > > > > > 
> > > > > > A RoCE RDMA connection uses the CMA protocol to establish an RDMA
> > > > > > connection. During setup the code uses hard-coded timeout/retry values.
> > > > > > These values govern how a Connect Request that receives no answer is
> > > > > > re-tried. During the re-try attempts the ARP updates of the destination
> > > > > > server are ignored. The current timeout values lead to a 4+ minute long
> > > > > > attempt at connecting to a server that no longer owns the IP after the
> > > > > > ARP update happens.
> > > > > > 
> > > > > > The ask is to make the timeout/retry values configurable via procfs or
> > > > > > sysfs. This will allow environments that use RoCE to reduce the timeouts
> > > > > > to more reasonable values and to react to ARP updates faster. Other CMA
> > > > > > users (e.g. IB) can continue to use the existing values.
> > > > 
> > > > I would rather not add a user-facing tunable. The fabric should
> > > > be better at detecting addressing changes within a reasonable
> > > > time. It would be helpful to provide a history of why the ARP
> > > > timeout is so lax -- do certain ULPs rely on it being long?
> > > 
> > > I don't know about ULPs and ARPs, but how to calculate TimeWait is
> > > described in the spec.
> > > 
> > > Regarding the tunable, I agree. Because it needs to be per-connection, most
> > > likely not many people in the world would succeed in configuring it properly.
> > 
> > Maybe we should be disconnecting the cm_id if a gratuitous ARP changes
> > the MAC address? The cm_id is surely broken after that event, right?
> 
> Is there an event on gratuitous ARP? And we also need to notify the
> user-space application, right?

I think there is a net notifier for this?

Userspace will see it via the CM event we'll need to trigger.

Jason
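
For the userspace side, a minimal sketch of how a librdmacm consumer would
observe such an event, assuming the existing RDMA_CM_EVENT_ADDR_CHANGE (or
a new event) is what ends up being triggered:

    #include <rdma/rdma_cma.h>

    struct rdma_cm_event *event;

    /* channel is the application's rdma_event_channel */
    while (rdma_get_cm_event(channel, &event) == 0) {
            if (event->event == RDMA_CM_EVENT_ADDR_CHANGE) {
                    /* the destination mapping changed: tear down the
                     * connection and re-resolve the address */
            }
            rdma_ack_cm_event(event);
    }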


* Re: [Bug 214523] New: RDMA Mellanox RoCE drivers are unresponsive to ARP updates during a reconnect
  2021-09-27 13:10           ` Jason Gunthorpe
@ 2021-09-27 13:32             ` Haakon Bugge
  2021-10-15  6:35               ` Mark Zhang
  0 siblings, 1 reply; 10+ messages in thread
From: Haakon Bugge @ 2021-09-27 13:32 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Mark Zhang, Leon Romanovsky, Chuck Lever III, bugzilla-daemon,
	OFED mailing list



> On 27 Sep 2021, at 15:10, Jason Gunthorpe <jgg@ziepe.ca> wrote:
> 
> On Mon, Sep 27, 2021 at 08:55:19PM +0800, Mark Zhang wrote:
>> On 9/27/2021 8:24 PM, Jason Gunthorpe wrote:
>>> On Mon, Sep 27, 2021 at 03:09:44PM +0300, Leon Romanovsky wrote:
>>>> On Sun, Sep 26, 2021 at 05:36:01PM +0000, Chuck Lever III wrote:
>>>>> Hi Leon-
>>>>> 
>>>>> Thanks for the suggestion! More below.
>>>>> 
>>>>>> On Sep 26, 2021, at 4:02 AM, Leon Romanovsky <leon@kernel.org> wrote:
>>>>>> 
>>>>>> On Fri, Sep 24, 2021 at 03:34:32PM +0000, bugzilla-daemon@bugzilla.kernel.org wrote:
>>>>>>> https://bugzilla.kernel.org/show_bug.cgi?id=214523
>>>>>>> 
>>>>>>>            Bug ID: 214523
>>>>>>>           Summary: RDMA Mellanox RoCE drivers are unresponsive to ARP
>>>>>>>                    updates during a reconnect
>>>>>>>           Product: Drivers
>>>>>>>           Version: 2.5
>>>>>>>    Kernel Version: 5.14
>>>>>>>          Hardware: All
>>>>>>>                OS: Linux
>>>>>>>              Tree: Mainline
>>>>>>>            Status: NEW
>>>>>>>          Severity: normal
>>>>>>>          Priority: P1
>>>>>>>         Component: Infiniband/RDMA
>>>>>>>          Assignee: drivers_infiniband-rdma@kernel-bugs.osdl.org
>>>>>>>          Reporter: kolga@netapp.com
>>>>>>>        Regression: No
>>>>>>> 
>>>>>>> A RoCE RDMA connection uses the CMA protocol to establish an RDMA
>>>>>>> connection. During setup the code uses hard-coded timeout/retry values.
>>>>>>> These values govern how a Connect Request that receives no answer is
>>>>>>> re-tried. During the re-try attempts the ARP updates of the destination
>>>>>>> server are ignored. The current timeout values lead to a 4+ minute long
>>>>>>> attempt at connecting to a server that no longer owns the IP after the
>>>>>>> ARP update happens.
>>>>>>> 
>>>>>>> The ask is to make the timeout/retry values configurable via procfs or
>>>>>>> sysfs. This will allow environments that use RoCE to reduce the timeouts
>>>>>>> to more reasonable values and to react to ARP updates faster. Other CMA
>>>>>>> users (e.g. IB) can continue to use the existing values.
>>>>> 
>>>>> I would rather not add a user-facing tunable. The fabric should
>>>>> be better at detecting addressing changes within a reasonable
>>>>> time. It would be helpful to provide a history of why the ARP
>>>>> timeout is so lax -- do certain ULPs rely on it being long?
>>>> 
>>>> I don't know about ULPs and ARPs, but how to calculate TimeWait is
>>>> described in the spec.
>>>> 
>>>> Regarding the tunable, I agree. Because it needs to be per-connection, most
>>>> likely not many people in the world would succeed in configuring it properly.
>>> 
>>> Maybe we should be disconnecting the cm_id if a gratuitous ARP changes
>>> the MAC address? The cm_id is surely broken after that event, right?
>> 
>> Is there an event on gratuitous ARP? And we also need to notify the
>> user-space application, right?
> 
> I think there is a net notifier for this?

NETEVENT_NEIGH_UPDATE, maybe?


Thxs, Håkon

> 
> Userspace will see it via the CM event we'll need to trigger.
> 
> Jason



* Re: [Bug 214523] New: RDMA Mellanox RoCE drivers are unresponsive to ARP updates during a reconnect
  2021-09-27 12:09     ` Leon Romanovsky
  2021-09-27 12:24       ` Jason Gunthorpe
@ 2021-09-27 16:14       ` Chuck Lever III
  1 sibling, 0 replies; 10+ messages in thread
From: Chuck Lever III @ 2021-09-27 16:14 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: bugzilla-daemon, linux-rdma



> On Sep 27, 2021, at 8:09 AM, Leon Romanovsky <leon@kernel.org> wrote:
> 
> On Sun, Sep 26, 2021 at 05:36:01PM +0000, Chuck Lever III wrote:
>> Hi Leon-
>> 
>> Thanks for the suggestion! More below.
>> 
>>> On Sep 26, 2021, at 4:02 AM, Leon Romanovsky <leon@kernel.org> wrote:
>>> 
>>> On Fri, Sep 24, 2021 at 03:34:32PM +0000, bugzilla-daemon@bugzilla.kernel.org wrote:
>>>> https://bugzilla.kernel.org/show_bug.cgi?id=214523
>>>> 
>>>>           Bug ID: 214523
>>>>          Summary: RDMA Mellanox RoCE drivers are unresponsive to ARP
>>>>                   updates during a reconnect
>>>>          Product: Drivers
>>>>          Version: 2.5
>>>>   Kernel Version: 5.14
>>>>         Hardware: All
>>>>               OS: Linux
>>>>             Tree: Mainline
>>>>           Status: NEW
>>>>         Severity: normal
>>>>         Priority: P1
>>>>        Component: Infiniband/RDMA
>>>>         Assignee: drivers_infiniband-rdma@kernel-bugs.osdl.org
>>>>         Reporter: kolga@netapp.com
>>>>       Regression: No
>>>> 
>>>> A RoCE RDMA connection uses the CMA protocol to establish an RDMA
>>>> connection. During setup the code uses hard-coded timeout/retry values.
>>>> These values govern how a Connect Request that receives no answer is
>>>> re-tried. During the re-try attempts the ARP updates of the destination
>>>> server are ignored. The current timeout values lead to a 4+ minute long
>>>> attempt at connecting to a server that no longer owns the IP after the
>>>> ARP update happens.
>>>> 
>>>> The ask is to make the timeout/retry values configurable via procfs or
>>>> sysfs. This will allow environments that use RoCE to reduce the timeouts
>>>> to more reasonable values and to react to ARP updates faster. Other CMA
>>>> users (e.g. IB) can continue to use the existing values.
>> 
>> I would rather not add a user-facing tunable. The fabric should
>> be better at detecting addressing changes within a reasonable
>> time. It would be helpful to provide a history of why the ARP
>> timeout is so lax -- do certain ULPs rely on it being long?
> 
> I don't know about ULPs and ARPs, but how to calculate TimeWait is
> described in the spec.
> 
> Regarding the tunable, I agree. Because it needs to be per-connection, most
> likely not many people in the world would succeed in configuring it properly.

Exactly.


>>>> The problem exists in all kernel versions, but this bugzilla is filed
>>>> for the 5.14 kernel.
>>>> 
>>>> The use case is (RoCE-based) NFSoRDMA where a server went down and
>>>> another server was brought up in its place. The RDMA layer introduces a
>>>> 4+ minute delay in re-establishing an RDMA connection and letting IO
>>>> resume, due to its inability to react to the ARP update.
>>> 
>>> RDMA-CM has many different timeouts, so I hope that my answer is for the
>>> right timeout.
>>> 
>>> We probably need to extend rdma_connect() to receive a
>>> remote_cm_response_timeout value, so NFSoRDMA can set it to whatever
>>> value is appropriate.
>>> 
>>> The timewait will then be calculated based on it in ib_send_cm_req().
>> 
>> I hope a mechanism can be found that behaves the same or nearly the
>> same way for all RDMA fabrics.
> 
> It depends on the fabric itself; in every network
> remote_cm_response_timeout can be different.

What I mean is I hope a way can be found so that RDMA consumers do
not have to be aware of the fabric differences.


>> For those who are not NFS-savvy:
>> 
>> Simple NFS server failover is typically implemented with a heartbeat
>> between two similar platforms that both access the same backend
>> storage. When one platform fails, the other detects it and takes over
>> the failing platform's IP address. Clients detect connection loss
>> with the failing platform, and upon reconnection to that IP address
>> are transparently directed to the other platform.
>> 
>> NFS server vendors have tried to extend this behavior to RDMA fabrics,
>> with varying degrees of success.
>> 
>> In addition to enforcing availability SLAs, the time it takes to
>> re-establish a working connection is critical for NFSv4 because each
>> client maintains a lease to prevent the server from purging open and
>> lock state. If the reconnect takes too long, the client's lease is
>> jeopardized because other clients can then access files that client
>> might still have locked or open.
>> 
>> 
>> --
>> Chuck Lever

--
Chuck Lever





* Re: [Bug 214523] New: RDMA Mellanox RoCE drivers are unresponsive to ARP updates during a reconnect
  2021-09-27 13:32             ` Haakon Bugge
@ 2021-10-15  6:35               ` Mark Zhang
  0 siblings, 0 replies; 10+ messages in thread
From: Mark Zhang @ 2021-10-15  6:35 UTC (permalink / raw)
  To: Haakon Bugge, Jason Gunthorpe
  Cc: Leon Romanovsky, Chuck Lever III, bugzilla-daemon, OFED mailing list

On 9/27/2021 9:32 PM, Haakon Bugge wrote:
>> On 27 Sep 2021, at 15:10, Jason Gunthorpe <jgg@ziepe.ca> wrote:
>>
>> On Mon, Sep 27, 2021 at 08:55:19PM +0800, Mark Zhang wrote:
>>> On 9/27/2021 8:24 PM, Jason Gunthorpe wrote:
>>>> On Mon, Sep 27, 2021 at 03:09:44PM +0300, Leon Romanovsky wrote:
>>>>> On Sun, Sep 26, 2021 at 05:36:01PM +0000, Chuck Lever III wrote:
>>>>>> Hi Leon-
>>>>>>
>>>>>> Thanks for the suggestion! More below.
>>>>>>
>>>>>>> On Sep 26, 2021, at 4:02 AM, Leon Romanovsky <leon@kernel.org> wrote:
>>>>>>>
>>>>>>> On Fri, Sep 24, 2021 at 03:34:32PM +0000, bugzilla-daemon@bugzilla.kernel.org wrote:
>>>>>>>> https://bugzilla.kernel.org/show_bug.cgi?id=214523
>>>>>>>>
>>>>>>>>             Bug ID: 214523
>>>>>>>>            Summary: RDMA Mellanox RoCE drivers are unresponsive to ARP
>>>>>>>>                     updates during a reconnect
>>>>>>>>            Product: Drivers
>>>>>>>>            Version: 2.5
>>>>>>>>     Kernel Version: 5.14
>>>>>>>>           Hardware: All
>>>>>>>>                 OS: Linux
>>>>>>>>               Tree: Mainline
>>>>>>>>             Status: NEW
>>>>>>>>           Severity: normal
>>>>>>>>           Priority: P1
>>>>>>>>          Component: Infiniband/RDMA
>>>>>>>>           Assignee: drivers_infiniband-rdma@kernel-bugs.osdl.org
>>>>>>>>           Reporter: kolga@netapp.com
>>>>>>>>         Regression: No
>>>>>>>>
>>>>>>>> A RoCE RDMA connection uses the CMA protocol to establish an RDMA
>>>>>>>> connection. During setup the code uses hard-coded timeout/retry values.
>>>>>>>> These values govern how a Connect Request that receives no answer is
>>>>>>>> re-tried. During the re-try attempts the ARP updates of the destination
>>>>>>>> server are ignored. The current timeout values lead to a 4+ minute long
>>>>>>>> attempt at connecting to a server that no longer owns the IP after the
>>>>>>>> ARP update happens.
>>>>>>>>
>>>>>>>> The ask is to make the timeout/retry values configurable via procfs or
>>>>>>>> sysfs. This will allow environments that use RoCE to reduce the timeouts
>>>>>>>> to more reasonable values and to react to ARP updates faster. Other CMA
>>>>>>>> users (e.g. IB) can continue to use the existing values.
>>>>>>
>>>>>> I would rather not add a user-facing tunable. The fabric should
>>>>>> be better at detecting addressing changes within a reasonable
>>>>>> time. It would be helpful to provide a history of why the ARP
>>>>>> timeout is so lax -- do certain ULPs rely on it being long?
>>>>>
>>>>> I don't know about ULPs and ARPs, but how to calculate TimeWait is
>>>>> described in the spec.
>>>>>
>>>>> Regarding the tunable, I agree. Because it needs to be per-connection, most
>>>>> likely not many people in the world would succeed in configuring it properly.
>>>>
>>>> Maybe we should be disconnecting the cm_id if a gratuitous ARP changes
>>>> the MAC address? The cm_id is surely broken after that event, right?
>>>
>>> Is there an event on gratuitous ARP? And we also need to notify the
>>> user-space application, right?
>>
>> I think there is a net notifier for this?
> 
> NETEVENT_NEIGH_UPDATE, maybe?

How about doing it like this (a sketch of the first steps follows the list):

1. In cma.c we do register_netevent_notifier();
2. On each NETEVENT_NEIGH_UPDATE event, in netevent_callback():
    2.1. Allocate a work item (as the callback seems to run in interrupt
         context);
    2.2. In the new work:
           foreach(cm_dev) {
               foreach(id_priv) {
                   if ((id_priv.dst_ip == event.ip) &&
                       (id_priv.dst_addr != event.ha)) {

                       /* Anything more to do? */
                       report_event(RDMA_CM_EVENT_ADDR_CHANGE);
                   }
               }
           }

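A minimal sketch of steps 1 and 2.1 under those assumptions
(register_netevent_notifier(), NETEVENT_NEIGH_UPDATE and the workqueue
calls are real kernel interfaces; the cma_netevent_* names are made up,
and the id walk of step 2.2 is left out):

    #include <linux/notifier.h>
    #include <linux/slab.h>
    #include <linux/workqueue.h>
    #include <net/netevent.h>

    static void cma_netevent_work_handler(struct work_struct *work)
    {
            /* step 2.2 would go here: walk the cm_ids and report
             * RDMA_CM_EVENT_ADDR_CHANGE where the destination IP
             * matches but the cached MAC no longer does */
            kfree(work);
    }

    static int cma_netevent_callback(struct notifier_block *self,
                                     unsigned long event, void *ctx)
    {
            /* ctx is a struct neighbour *: primary_key holds the IP,
             * ha the updated MAC address */
            struct work_struct *work;

            if (event != NETEVENT_NEIGH_UPDATE)
                    return NOTIFY_DONE;

            /* the callback may run in atomic context, so defer the
             * scan to a work item (step 2.1) */
            work = kzalloc(sizeof(*work), GFP_ATOMIC);
            if (!work)
                    return NOTIFY_DONE;
            INIT_WORK(work, cma_netevent_work_handler);
            queue_work(system_unbound_wq, work);
            return NOTIFY_DONE;
    }

    static struct notifier_block cma_netevent_nb = {
            .notifier_call = cma_netevent_callback,
    };

    /* once, from cma_init(): register_netevent_notifier(&cma_netevent_nb); */
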
And I have these questions:
1. Should we do it in cma.c or cm.c?
2. Should we register only once, or per id? If we register per id then
    there may be many ids;
3. If we do it in cm.c, should we also do more, like ib_cancel_mad()?
    Or is reporting an event enough?
4. We need to create a work item on each ARP event; would that be a
    heavy load?
5. Do we need a new event, instead of RDMA_CM_EVENT_ADDR_CHANGE?
6. What about a peer that is not in the same subnet?

Thank you very much.



Thread overview: 10+ messages
2021-09-24 15:34 [Bug 214523] New: RDMA Mellanox RoCE drivers are unresponsive to ARP updates during a reconnect bugzilla-daemon
2021-09-26  8:02 ` Leon Romanovsky
2021-09-26 17:36   ` Chuck Lever III
2021-09-27 12:09     ` Leon Romanovsky
2021-09-27 12:24       ` Jason Gunthorpe
2021-09-27 12:55         ` Mark Zhang
2021-09-27 13:10           ` Jason Gunthorpe
2021-09-27 13:32             ` Haakon Bugge
2021-10-15  6:35               ` Mark Zhang
2021-09-27 16:14       ` Chuck Lever III
