* Problem re-establishing GSS contexts after a server reboot
@ 2016-07-19 14:51 Chuck Lever
  2016-07-20  9:14 ` Adamson, Andy
From: Chuck Lever @ 2016-07-19 14:51 UTC (permalink / raw)
  To: Adamson, Andy; +Cc: Linux NFS Mailing List

Hi Andy-

Thanks for taking the time to discuss this with me. I've
copied linux-nfs to make this e-mail also an upstream bug
report.

As we saw in the network capture, recovery of GSS contexts
after a server reboot fails in certain cases with NFSv4.0
and NFSv4.1 mount points.

The reproducer is a simple program that generates one NFS
WRITE periodically, run while the server repeatedly reboots
(or one cluster head fails over to the other and back). The
goal of the reproducer is to identify problems with state
recovery without a lot of other I/O going on to clutter up
the network capture.
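
For concreteness, a minimal sketch of such a reproducer is below. This
is my own illustration, not the actual test program; the mount path and
the one-second interval are assumptions. O_SYNC is used so that each
append produces an NFS WRITE on the wire.

/* Reproducer sketch: append a small record to a file on the krb5 mount
 * once per second while the server reboots or fails over. */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	/* Path on the sec=krb5 mount; chosen only for illustration. */
	const char *path = "/mnt/krb5/reproducer.dat";
	char buf[64];
	int fd, len;

	for (;;) {
		fd = open(path, O_WRONLY | O_CREAT | O_APPEND | O_SYNC, 0644);
		if (fd < 0) {
			perror("open");
			return 1;
		}
		len = snprintf(buf, sizeof(buf), "tick %ld\n", (long)time(NULL));
		if (write(fd, buf, len) < 0)
			perror("write");	/* keep going; we want to watch recovery */
		close(fd);
		sleep(1);
	}
}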

In the failing case, sec=krb5 is specified on the mount
point, and the reproducer is run as root. We've found this
combination fails with both NFSv4.0 and NFSv4.1.

At mount time, the client establishes a GSS context for
lease management operations, which is bound to the client's
NFS service principal and uses GSS service "integrity."
Call this GSS context 1.

When the reproducer starts, a second GSS context is
established for NFS operations associated with that user.
Since the reproducer is running as root, this context is
also bound to the client's NFS service principal, but it
uses the GSS service "none" (reflecting the explicit
request for "sec=krb5"). Call this GSS context 2.
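
For readers less familiar with the RPCSEC_GSS terminology used above:
the three Kerberos security flavors differ only in which GSS service
they request. The values below are from RFC 2203, and the mount-option
mapping is the conventional one.

/* RPCSEC_GSS protection levels, per RFC 2203 */
enum rpc_gss_svc {
	RPC_GSS_SVC_NONE      = 1,	/* authentication only; what sec=krb5 requests */
	RPC_GSS_SVC_INTEGRITY = 2,	/* integrity-protected arguments; sec=krb5i    */
	RPC_GSS_SVC_PRIVACY   = 3,	/* encrypted arguments; sec=krb5p              */
};

As described above, lease management traffic in this report uses the
integrity service even though the data mount itself is sec=krb5.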

After the server reboots, the client re-establishes a TCP
connection with the server, and performs a RENEW
operation using context 1. Thanks to the server reboot,
contexts 1 and 2 are now stale. The server thus rejects
the RPC with RPCSEC_GSS_CTXPROBLEM.

The client performs a GSS_INIT_SEC_CONTEXT via an NFSv4
NULL operation. Call this GSS context 3.

Interestingly, the client does not resend the RENEW
operation at this point (if it did, we wouldn't see this
problem at all).

The client then attempts to resume the reproducer workload.
It sends an NFSv4 WRITE operation, using the first available
GSS context in UID 0's credential cache, which is context 3,
already bound to the client's NFS service principal. But GSS
service "none" is used for this operation, since it is on
behalf of the mount where sec=krb5 was specified.

The RPC is accepted, but the server reports
NFS4ERR_STALE_STATEID, since it has recently rebooted.

The client responds by attempting state recovery. The
first operation it tries is another RENEW. Since this is
a lease management operation, the client looks in UID 0's
credential cache again and finds the recently established
context 3. It tries the RENEW operation using GSS context
3 with GSS service "integrity."

The server rejects the RENEW RPC with AUTH_FAILED, and
the client reports that "check lease failed" and
terminates state recovery.
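
Both rejections seen so far come from the RPC auth_stat space.
RPCSEC_GSS_CTXPROBLEM tells the client it may establish a new context
and retry, while AUTH_FAILED is the catch-all "unknown reason" code
that the client treats as fatal. The relevant subset of values, with
numbers as defined by RFC 5531 and RFC 2203:

/* Illustrative enum: the auth_stat values involved in this report */
enum auth_stat_subset {
	AUTH_REJECTEDCRED      = 2,	/* client credential expired or replayed  */
	AUTH_REJECTEDVERF      = 4,	/* verifier expired or replayed           */
	AUTH_FAILED            = 7,	/* unknown reason; the client gives up    */
	RPCSEC_GSS_CREDPROBLEM = 13,	/* server no longer recognizes the cred   */
	RPCSEC_GSS_CTXPROBLEM  = 14,	/* context problem; create new ctx, retry */
};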

The client re-drives the WRITE operation with the stale
stateid with predictable results. The client again tries
to recover state by sending a RENEW, and still uses the
same GSS context 3 with service "integrity" and gets the
same result. A (perhaps slow-motion) STALE_STATEID loop
ensues, and the client mount point is deadlocked.

Your analysis was that because the reproducer is run as
root, both the reproducer's I/O operations, and lease
management operations, attempt to use the same GSS context
in UID 0's credential cache, but each uses different GSS
services. The key issue seems to be why, when the mount
is first established, the client is correctly able to
establish two separate GSS contexts for UID 0; but after
a server reboot, the client attempts to use the same GSS
context with two different GSS services.

One solution is to introduce a quick check before a
context is used to see if the GSS service bound to it
matches the GSS service that the caller intends to use.
I'm not sure how that can be done without exposing a window
where another caller requests the use of a GSS context and
grabs the fresh one, before it can be used by our first
caller and bound to its desired GSS service.
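
To make that first idea concrete, here is a rough sketch of such a
service-match check. It is illustrative only: the structure and
function names are invented for this example, and it deliberately
exposes, rather than solves, the window described above (the test and
the bind must happen atomically or two callers can still race).

/* Illustrative sketch; not kernel code. A cached context records the
 * GSS service it was first used with, and a lookup for a different
 * service refuses to reuse it. */
#include <stddef.h>

enum gss_service { GSS_SVC_UNBOUND = 0, GSS_SVC_NONE, GSS_SVC_INTEGRITY, GSS_SVC_PRIVACY };

struct gss_ctx_entry {
	unsigned int uid;
	enum gss_service bound_service;	/* set by the first user of the context */
	int established;		/* non-zero once GSS_INIT_SEC_CONTEXT has completed */
};

/* Return the cached context only if it is unbound or already bound to
 * the service the caller wants; otherwise the caller must establish a
 * separate context. Both the test and the bind would have to be done
 * under one lock to close the race completely. */
static struct gss_ctx_entry *
gss_lookup_ctx(struct gss_ctx_entry *cached, unsigned int uid, enum gss_service want)
{
	if (!cached || cached->uid != uid || !cached->established)
		return NULL;			/* create a new context */
	if (cached->bound_service == GSS_SVC_UNBOUND)
		cached->bound_service = want;	/* first user binds the service */
	return cached->bound_service == want ? cached : NULL;
}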

Other solutions might be to somehow isolate the credential
cache used for lease management operations, or to split
credential caches by GSS service.


--
Chuck Lever





* Re: Problem re-establishing GSS contexts after a server reboot
  2016-07-19 14:51 Problem re-establishing GSS contexts after a server reboot Chuck Lever
@ 2016-07-20  9:14 ` Adamson, Andy
  2016-07-20 16:56   ` Olga Kornievskaia
From: Adamson, Andy @ 2016-07-20  9:14 UTC (permalink / raw)
  To: Chuck Lever; +Cc: Adamson, Andy, Linux NFS Mailing List

> On Jul 19, 2016, at 10:51 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
> 
> Hi Andy-
> 
> Thanks for taking the time to discuss this with me. I've
> copied linux-nfs to make this e-mail also an upstream bug
> report.
> 
> As we saw in the network capture, recovery of GSS contexts
> after a server reboot fails in certain cases with NFSv4.0
> and NFSv4.1 mount points.
> 
> The reproducer is a simple program that generates one NFS
> WRITE periodically, run while the server repeatedly reboots
> (or one cluster head fails over to the other and back). The
> goal of the reproducer is to identify problems with state
> recovery without a lot of other I/O going on to clutter up
> the network capture.
> 
> In the failing case, sec=krb5 is specified on the mount
> point, and the reproducer is run as root. We've found this
> combination fails with both NFSv4.0 and NFSv4.1.
> 
> At mount time, the client establishes a GSS context for
> lease management operations, which is bound to the client's
> NFS service principal and uses GSS service "integrity."
> Call this GSS context 1.
> 
> When the reproducer starts, a second GSS context is
> established for NFS operations associated with that user.
> Since the reproducer is running as root, this context is
> also bound to the client's NFS service principal, but it
> uses the GSS service "none" (reflecting the explicit
> request for "sec=krb5"). Call this GSS context 2.
> 
> After the server reboots, the client re-establishes a TCP
> connection with the server, and performs a RENEW
> operation using context 1. Thanks to the server reboot,
> contexts 1 and 2 are now stale. The server thus rejects
> the RPC with RPCSEC_GSS_CTXPROBLEM.
> 
> The client performs a GSS_INIT_SEC_CONTEXT via an NFSv4
> NULL operation. Call this GSS context 3.
> 
> Interestingly, the client does not resend the RENEW
> operation at this point (if it did, we wouldn't see this
> problem at all).
> 
> The client then attempts to resume the reproducer workload.
> It sends an NFSv4 WRITE operation, using the first available
> GSS context in UID 0's credential cache, which is context 3,
> already bound to the client's NFS service principal. But GSS
> service "none" is used for this operation, since it is on
> behalf of the mount where sec=krb5 was specified.
> 
> The RPC is accepted, but the server reports
> NFS4ERR_STALE_STATEID, since it has recently rebooted.
> 
> The client responds by attempting state recovery. The
> first operation it tries is another RENEW. Since this is
> a lease management operation, the client looks in UID 0's
> credential cache again and finds the recently established
> context 3. It tries the RENEW operation using GSS context
> 3 with GSS service "integrity."
> 
> The server rejects the RENEW RPC with AUTH_FAILED, and
> the client reports that "check lease failed" and
> terminates state recovery.
> 
> The client re-drives the WRITE operation with the stale
> stateid with predictable results. The client again tries
> to recover state by sending a RENEW, and still uses the
> same GSS context 3 with service "integrity" and gets the
> same result. A (perhaps slow-motion) STALE_STATEID loop
> ensues, and the client mount point is deadlocked.
> 
> Your analysis was that because the reproducer is run as
> root, both the reproducer's I/O operations, and lease
> management operations, attempt to use the same GSS context
> in UID 0's credential cache, but each uses different GSS
> services.

As RFC2203 states, "In a creation request, the seq_num and service fields are undefined and both must be ignored by the server”
So a context creation request while kicked off by an operation with a service attached (e.g. WRITE uses rpc_gss_svc_none and RENEW uses rpc_gss_svc_integrity), can be used by either service level.
AFAICS a single GSS context could in theory be used for all service levels, but in practice, GSS contexts are restricted to a service level (by client? by server? ) once they are used.


> The key issue seems to be why, when the mount
> is first established, the client is correctly able to
> establish two separate GSS contexts for UID 0; but after
> a server reboot, the client attempts to use the same GSS
> context with two different GSS services.

I speculate that it is a race between the WRITE and the RENEW to use the same newly created GSS context that has not been used yet, and so has no assigned service level, and the two requests race to set the service level.

—>Andy
> 
> One solution is to introduce a quick check before a
> context is used to see if the GSS service bound to it
> matches the GSS service that the caller intends to use.
> I'm not sure how that can be done without exposing a window
> where another caller requests the use of a GSS context and
> grabs the fresh one, before it can be used by our first
> caller and bound to its desired GSS service.
> 
> Other solutions might be to somehow isolate the credential
> cache used for lease management operations, or to split
> credential caches by GSS service.
> 
> 
> --
> Chuck Lever
> 
> 
> 
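
For reference, the creation request Andy quotes is carried in the
RPCSEC_GSS credential body defined by RFC 2203. During context
creation (gss_proc = RPCSEC_GSS_INIT) the seq_num and service fields
are present on the wire but carry no meaning, which is why a freshly
created context is not yet tied to either service level. A rough C
rendering of that structure follows (field names follow the RFC; this
is an illustration, not the Linux implementation):

/* RPCSEC_GSS control procedures and credential body, per RFC 2203 */
enum rpc_gss_proc {
	RPCSEC_GSS_DATA          = 0,
	RPCSEC_GSS_INIT          = 1,	/* context creation: seq_num/service ignored */
	RPCSEC_GSS_CONTINUE_INIT = 2,
	RPCSEC_GSS_DESTROY       = 3,
};

struct rpc_gss_cred {
	unsigned int      gss_version;	/* RPCSEC_GSS_VERS_1 == 1 */
	enum rpc_gss_proc gss_proc;
	unsigned int      seq_num;	/* undefined while gss_proc == RPCSEC_GSS_INIT */
	unsigned int      service;	/* undefined while gss_proc == RPCSEC_GSS_INIT */
	struct {
		unsigned int   len;	/* opaque context handle returned by the server */
		unsigned char *data;
	} handle;
};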


* Re: Problem re-establishing GSS contexts after a server reboot
  2016-07-20  9:14 ` Adamson, Andy
@ 2016-07-20 16:56   ` Olga Kornievskaia
  2016-07-21  6:55     ` Chuck Lever
From: Olga Kornievskaia @ 2016-07-20 16:56 UTC (permalink / raw)
  To: Adamson, Andy; +Cc: Chuck Lever, Linux NFS Mailing List

On Wed, Jul 20, 2016 at 5:14 AM, Adamson, Andy
<William.Adamson@netapp.com> wrote:
>
>> On Jul 19, 2016, at 10:51 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>
>> Hi Andy-
>>
>> Thanks for taking the time to discuss this with me. I've
>> copied linux-nfs to make this e-mail also an upstream bug
>> report.
>>
>> As we saw in the network capture, recovery of GSS contexts
>> after a server reboot fails in certain cases with NFSv4.0
>> and NFSv4.1 mount points.
>>
>> The reproducer is a simple program that generates one NFS
>> WRITE periodically, run while the server repeatedly reboots
>> (or one cluster head fails over to the other and back). The
>> goal of the reproducer is to identify problems with state
>> recovery without a lot of other I/O going on to clutter up
>> the network capture.
>>
>> In the failing case, sec=krb5 is specified on the mount
>> point, and the reproducer is run as root. We've found this
>> combination fails with both NFSv4.0 and NFSv4.1.
>>
>> At mount time, the client establishes a GSS context for
>> lease management operations, which is bound to the client's
>> NFS service principal and uses GSS service "integrity."
>> Call this GSS context 1.
>>
>> When the reproducer starts, a second GSS context is
>> established for NFS operations associated with that user.
>> Since the reproducer is running as root, this context is
>> also bound to the client's NFS service principal, but it
>> uses the GSS service "none" (reflecting the explicit
>> request for "sec=krb5"). Call this GSS context 2.
>>
>> After the server reboots, the client re-establishes a TCP
>> connection with the server, and performs a RENEW
>> operation using context 1. Thanks to the server reboot,
>> contexts 1 and 2 are now stale. The server thus rejects
>> the RPC with RPCSEC_GSS_CTXPROBLEM.
>>
>> The client performs a GSS_INIT_SEC_CONTEXT via an NFSv4
>> NULL operation. Call this GSS context 3.
>>
>> Interestingly, the client does not resend the RENEW
>> operation at this point (if it did, we wouldn't see this
>> problem at all).
>>
>> The client then attempts to resume the reproducer workload.
>> It sends an NFSv4 WRITE operation, using the first available
>> GSS context in UID 0's credential cache, which is context 3,
>> already bound to the client's NFS service principal. But GSS
>> service "none" is used for this operation, since it is on
>> behalf of the mount where sec=krb5 was specified.
>>
>> The RPC is accepted, but the server reports
>> NFS4ERR_STALE_STATEID, since it has recently rebooted.
>>
>> The client responds by attempting state recovery. The
>> first operation it tries is another RENEW. Since this is
>> a lease management operation, the client looks in UID 0's
>> credential cache again and finds the recently established
>> context 3. It tries the RENEW operation using GSS context
>> 3 with GSS service "integrity."
>>
>> The server rejects the RENEW RPC with AUTH_FAILED, and
>> the client reports that "check lease failed" and
>> terminates state recovery.
>>
>> The client re-drives the WRITE operation with the stale
>> stateid with predictable results. The client again tries
>> to recover state by sending a RENEW, and still uses the
>> same GSS context 3 with service "integrity" and gets the
>> same result. A (perhaps slow-motion) STALE_STATEID loop
>> ensues, and the client mount point is deadlocked.
>>
>> Your analysis was that because the reproducer is run as
>> root, both the reproducer's I/O operations, and lease
>> management operations, attempt to use the same GSS context
>> in UID 0's credential cache, but each uses different GSS
>> services.
>
> As RFC2203 states, "In a creation request, the seq_num and service fields are undefined and both must be ignored by the server”
> So a context creation request while kicked off by an operation with a service attached (e.g. WRITE uses rpc_gss_svc_none and RENEW uses rpc_gss_svc_integrity), can be used by either service level.
> AFAICS a single GSS context could in theory be used for all service levels, but in practice, GSS contexts are restricted to a service level (by client? by server? ) once they are used.
>
>
>> The key issue seems to be why, when the mount
>> is first established, the client is correctly able to
>> establish two separate GSS contexts for UID 0; but after
>> a server reboot, the client attempts to use the same GSS
>> context with two different GSS services.
>
> I speculate that it is a race between the WRITE and the RENEW to use the same newly created GSS context that has not been used yet, and so has no assigned service level, and the two requests race to set the service level.

I agree with Andy. It must be a tight race. I have tried to reproduce
your scenario, and in my tests of rebooting the server all cases recover
correctly. In my case, if RENEW is the one hitting the AUTH_ERR, then
a new context is established and RENEW using the integrity service
is retried with the new context, which gets ERR_STALE_CLIENTID, which
the client then recovers from. If it's an operation (I have a GETATTR)
that gets the AUTH_ERR, then it gets a new context and is retried using
the none service. Then RENEW gets its own AUTH_ERR as it uses a different
context, a new context is obtained, RENEW is retried over integrity, and
it gets ERR_STALE_CLIENTID, which it recovers from.


>
> —>Andy
>>
>> One solution is to introduce a quick check before a
>> context is used to see if the GSS service bound to it
>> matches the GSS service that the caller intends to use.
>> I'm not sure how that can be done without exposing a window
>> where another caller requests the use of a GSS context and
>> grabs the fresh one, before it can be used by our first
>> caller and bound to its desired GSS service.
>>
>> Other solutions might be to somehow isolate the credential
>> cache used for lease management operations, or to split
>> credential caches by GSS service.
>>
>>
>> --
>> Chuck Lever
>>
>>
>>
>


* Re: Problem re-establishing GSS contexts after a server reboot
  2016-07-20 16:56   ` Olga Kornievskaia
@ 2016-07-21  6:55     ` Chuck Lever
  2016-07-21 16:04       ` Olga Kornievskaia
From: Chuck Lever @ 2016-07-21  6:55 UTC (permalink / raw)
  To: Olga Kornievskaia; +Cc: Adamson, Andy, Linux NFS Mailing List


> On Jul 20, 2016, at 6:56 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> 
> On Wed, Jul 20, 2016 at 5:14 AM, Adamson, Andy
> <William.Adamson@netapp.com> wrote:
>> 
>>> On Jul 19, 2016, at 10:51 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>> 
>>> Hi Andy-
>>> 
>>> Thanks for taking the time to discuss this with me. I've
>>> copied linux-nfs to make this e-mail also an upstream bug
>>> report.
>>> 
>>> As we saw in the network capture, recovery of GSS contexts
>>> after a server reboot fails in certain cases with NFSv4.0
>>> and NFSv4.1 mount points.
>>> 
>>> The reproducer is a simple program that generates one NFS
>>> WRITE periodically, run while the server repeatedly reboots
>>> (or one cluster head fails over to the other and back). The
>>> goal of the reproducer is to identify problems with state
>>> recovery without a lot of other I/O going on to clutter up
>>> the network capture.
>>> 
>>> In the failing case, sec=krb5 is specified on the mount
>>> point, and the reproducer is run as root. We've found this
>>> combination fails with both NFSv4.0 and NFSv4.1.
>>> 
>>> At mount time, the client establishes a GSS context for
>>> lease management operations, which is bound to the client's
>>> NFS service principal and uses GSS service "integrity."
>>> Call this GSS context 1.
>>> 
>>> When the reproducer starts, a second GSS context is
>>> established for NFS operations associated with that user.
>>> Since the reproducer is running as root, this context is
>>> also bound to the client's NFS service principal, but it
>>> uses the GSS service "none" (reflecting the explicit
>>> request for "sec=krb5"). Call this GSS context 2.
>>> 
>>> After the server reboots, the client re-establishes a TCP
>>> connection with the server, and performs a RENEW
>>> operation using context 1. Thanks to the server reboot,
>>> contexts 1 and 2 are now stale. The server thus rejects
>>> the RPC with RPCSEC_GSS_CTXPROBLEM.
>>> 
>>> The client performs a GSS_INIT_SEC_CONTEXT via an NFSv4
>>> NULL operation. Call this GSS context 3.
>>> 
>>> Interestingly, the client does not resend the RENEW
>>> operation at this point (if it did, we wouldn't see this
>>> problem at all).
>>> 
>>> The client then attempts to resume the reproducer workload.
>>> It sends an NFSv4 WRITE operation, using the first available
>>> GSS context in UID 0's credential cache, which is context 3,
>>> already bound to the client's NFS service principal. But GSS
>>> service "none" is used for this operation, since it is on
>>> behalf of the mount where sec=krb5 was specified.
>>> 
>>> The RPC is accepted, but the server reports
>>> NFS4ERR_STALE_STATEID, since it has recently rebooted.
>>> 
>>> The client responds by attempting state recovery. The
>>> first operation it tries is another RENEW. Since this is
>>> a lease management operation, the client looks in UID 0's
>>> credential cache again and finds the recently established
>>> context 3. It tries the RENEW operation using GSS context
>>> 3 with GSS service "integrity."
>>> 
>>> The server rejects the RENEW RPC with AUTH_FAILED, and
>>> the client reports that "check lease failed" and
>>> terminates state recovery.
>>> 
>>> The client re-drives the WRITE operation with the stale
>>> stateid with predictable results. The client again tries
>>> to recover state by sending a RENEW, and still uses the
>>> same GSS context 3 with service "integrity" and gets the
>>> same result. A (perhaps slow-motion) STALE_STATEID loop
>>> ensues, and the client mount point is deadlocked.
>>> 
>>> Your analysis was that because the reproducer is run as
>>> root, both the reproducer's I/O operations, and lease
>>> management operations, attempt to use the same GSS context
>>> in UID 0's credential cache, but each uses different GSS
>>> services.
>> 
>> As RFC2203 states, "In a creation request, the seq_num and service fields are undefined and both must be ignored by the server”
>> So a context creation request while kicked off by an operation with a service attached (e.g. WRITE uses rpc_gss_svc_none and RENEW uses rpc_gss_svc_integrity), can be used by either service level.
>> AFAICS a single GSS context could in theory be used for all service levels, but in practice, GSS contexts are restricted to a service level (by client? by server? ) once they are used.
>> 
>> 
>>> The key issue seems to be why, when the mount
>>> is first established, the client is correctly able to
>>> establish two separate GSS contexts for UID 0; but after
>>> a server reboot, the client attempts to use the same GSS
>>> context with two different GSS services.
>> 
>> I speculate that it is a race between the WRITE and the RENEW to use the same newly created GSS context that has not been used yet, and so has no assigned service level, and the two requests race to set the service level.
> 
> I agree with Andy. It must be a tight race.

In one capture I see something like this after
the server restarts:

SYN
SYN, ACK
ACK
C WRITE
C SEQUENCE
R WRITE -> CTX_PROBLEM
R SEQUENCE -> CTX_PROBLEM
C NULL (KRB5_AP_REQ)
R NULL (KRB5_AP_REP)
C WRITE
C SEQUENCE
R WRITE -> NFS4ERR_STALE_STATEID
R SEQUENCE -> AUTH_FAILED

Andy's theory neatly explains this behavior.


> I have tried to reproduce
> your scenario and in my tests of rebooting the server all recover
> correctly. In my case, if RENEW was the one hitting the AUTH_ERR then
> the new context is established and then RENEW using integrity service
> is retried with the new context which gets ERR_STALE_CLIENTID which
> then client recovers from. If it's an operation (I have a GETATTR)
> that gets AUTH_ERR, then it gets new context and is retried using none
> service. Then RENEW gets its own AUTH_ERR as it uses a different
> context, a new context is gotten, RENEW is retried over integrity and
> gets ERR_STALE_CLIENTID which it recovers from.

If one operation is allowed to complete, then
the other will always recognize that another
fresh GSS context is needed. If two are sent
at the same time, they race and one always
fails.

Helen's test includes a second idle mount point
(sec=krb5i) and maybe that is needed to trigger
the race?


>> —>Andy
>>> 
>>> One solution is to introduce a quick check before a
>>> context is used to see if the GSS service bound to it
>>> matches the GSS service that the caller intends to use.
>>> I'm not sure how that can be done without exposing a window
>>> where another caller requests the use of a GSS context and
>>> grabs the fresh one, before it can be used by our first
>>> caller and bound to its desired GSS service.
>>> 
>>> Other solutions might be to somehow isolate the credential
>>> cache used for lease management operations, or to split
>>> credential caches by GSS service.
>>> 
>>> 
>>> --
>>> Chuck Lever
>>> 
>>> 
>>> 
>> 

--
Chuck Lever





* Re: Problem re-establishing GSS contexts after a server reboot
  2016-07-21  6:55     ` Chuck Lever
@ 2016-07-21 16:04       ` Olga Kornievskaia
  2016-07-21 17:56         ` Chuck Lever
From: Olga Kornievskaia @ 2016-07-21 16:04 UTC (permalink / raw)
  To: Chuck Lever; +Cc: Adamson, Andy, Linux NFS Mailing List

On Thu, Jul 21, 2016 at 2:55 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>
>> On Jul 20, 2016, at 6:56 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>
>> On Wed, Jul 20, 2016 at 5:14 AM, Adamson, Andy
>> <William.Adamson@netapp.com> wrote:
>>>
>>>> On Jul 19, 2016, at 10:51 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>
>>>> Hi Andy-
>>>>
>>>> Thanks for taking the time to discuss this with me. I've
>>>> copied linux-nfs to make this e-mail also an upstream bug
>>>> report.
>>>>
>>>> As we saw in the network capture, recovery of GSS contexts
>>>> after a server reboot fails in certain cases with NFSv4.0
>>>> and NFSv4.1 mount points.
>>>>
>>>> The reproducer is a simple program that generates one NFS
>>>> WRITE periodically, run while the server repeatedly reboots
>>>> (or one cluster head fails over to the other and back). The
>>>> goal of the reproducer is to identify problems with state
>>>> recovery without a lot of other I/O going on to clutter up
>>>> the network capture.
>>>>
>>>> In the failing case, sec=krb5 is specified on the mount
>>>> point, and the reproducer is run as root. We've found this
>>>> combination fails with both NFSv4.0 and NFSv4.1.
>>>>
>>>> At mount time, the client establishes a GSS context for
>>>> lease management operations, which is bound to the client's
>>>> NFS service principal and uses GSS service "integrity."
>>>> Call this GSS context 1.
>>>>
>>>> When the reproducer starts, a second GSS context is
>>>> established for NFS operations associated with that user.
>>>> Since the reproducer is running as root, this context is
>>>> also bound to the client's NFS service principal, but it
>>>> uses the GSS service "none" (reflecting the explicit
>>>> request for "sec=krb5"). Call this GSS context 2.
>>>>
>>>> After the server reboots, the client re-establishes a TCP
>>>> connection with the server, and performs a RENEW
>>>> operation using context 1. Thanks to the server reboot,
>>>> contexts 1 and 2 are now stale. The server thus rejects
>>>> the RPC with RPCSEC_GSS_CTXPROBLEM.
>>>>
>>>> The client performs a GSS_INIT_SEC_CONTEXT via an NFSv4
>>>> NULL operation. Call this GSS context 3.
>>>>
>>>> Interestingly, the client does not resend the RENEW
>>>> operation at this point (if it did, we wouldn't see this
>>>> problem at all).
>>>>
>>>> The client then attempts to resume the reproducer workload.
>>>> It sends an NFSv4 WRITE operation, using the first available
>>>> GSS context in UID 0's credential cache, which is context 3,
>>>> already bound to the client's NFS service principal. But GSS
>>>> service "none" is used for this operation, since it is on
>>>> behalf of the mount where sec=krb5 was specified.
>>>>
>>>> The RPC is accepted, but the server reports
>>>> NFS4ERR_STALE_STATEID, since it has recently rebooted.
>>>>
>>>> The client responds by attempting state recovery. The
>>>> first operation it tries is another RENEW. Since this is
>>>> a lease management operation, the client looks in UID 0's
>>>> credential cache again and finds the recently established
>>>> context 3. It tries the RENEW operation using GSS context
>>>> 3 with GSS service "integrity."
>>>>
>>>> The server rejects the RENEW RPC with AUTH_FAILED, and
>>>> the client reports that "check lease failed" and
>>>> terminates state recovery.
>>>>
>>>> The client re-drives the WRITE operation with the stale
>>>> stateid with predictable results. The client again tries
>>>> to recover state by sending a RENEW, and still uses the
>>>> same GSS context 3 with service "integrity" and gets the
>>>> same result. A (perhaps slow-motion) STALE_STATEID loop
>>>> ensues, and the client mount point is deadlocked.
>>>>
>>>> Your analysis was that because the reproducer is run as
>>>> root, both the reproducer's I/O operations, and lease
>>>> management operations, attempt to use the same GSS context
>>>> in UID 0's credential cache, but each uses different GSS
>>>> services.
>>>
>>> As RFC2203 states, "In a creation request, the seq_num and service fields are undefined and both must be ignored by the server”
>>> So a context creation request while kicked off by an operation with a service attached (e.g. WRITE uses rpc_gss_svc_none and RENEW uses rpc_gss_svc_integrity), can be used by either service level.
>>> AFAICS a single GSS context could in theory be used for all service levels, but in practice, GSS contexts are restricted to a service level (by client? by server? ) once they are used.
>>>
>>>
>>>> The key issue seems to be why, when the mount
>>>> is first established, the client is correctly able to
>>>> establish two separate GSS contexts for UID 0; but after
>>>> a server reboot, the client attempts to use the same GSS
>>>> context with two different GSS services.
>>>
>>> I speculate that it is a race between the WRITE and the RENEW to use the same newly created GSS context that has not been used yet, and so has no assigned service level, and the two requests race to set the service level.
>>
>> I agree with Andy. It must be a tight race.
>
> In one capture I see something like this after
> the server restarts:
>
> SYN
> SYN, ACK
> ACK
> C WRITE
> C SEQUENCE
> R WRITE -> CTX_PROBLEM
> R SEQUENCE -> CTX_PROBLEM
> C NULL (KRB5_AP_REQ)
> R NULL (KRB5_AP_REP)
> C WRITE
> C SEQUENCE
> R WRITE -> NFS4ERR_STALE_STATEID
> R SEQUENCE -> AUTH_FAILED
>
> Andy's theory neatly explains this behavior.
>
>
>> I have tried to reproduce
>> your scenario and in my tests of rebooting the server all recover
>> correctly. In my case, if RENEW was the one hitting the AUTH_ERR then
>> the new context is established and then RENEW using integrity service
>> is retried with the new context which gets ERR_STALE_CLIENTID which
>> then client recovers from. If it's an operation (I have a GETATTR)
>> that gets AUTH_ERR, then it gets new context and is retried using none
>> service. Then RENEW gets its own AUTH_ERR as it uses a different
>> context, a new context is gotten, RENEW is retried over integrity and
>> gets ERR_STALE_CLIENTID which it recovers from.
>
> If one operation is allowed to complete, then
> the other will always recognize that another
> fresh GSS context is needed. If two are sent
> at the same time, they race and one always
> fails.
>
> Helen's test includes a second idle mount point
> (sec=krb5i) and maybe that is needed to trigger
> the race?
>

Chuck, any chance to get "rpcdebug -m rpc auth" output during the
failure (gssd optionally)? (I realize that it might alter the timings
and not hit the issue, but it's worth a shot.)

>
>>> —>Andy
>>>>
>>>> One solution is to introduce a quick check before a
>>>> context is used to see if the GSS service bound to it
>>>> matches the GSS service that the caller intends to use.
>>>> I'm not sure how that can be done without exposing a window
>>>> where another caller requests the use of a GSS context and
>>>> grabs the fresh one, before it can be used by our first
>>>> caller and bound to its desired GSS service.
>>>>
>>>> Other solutions might be to somehow isolate the credential
>>>> cache used for lease management operations, or to split
>>>> credential caches by GSS service.
>>>>
>>>>
>>>> --
>>>> Chuck Lever
>>>>
>>>>
>>>>
>>>
>
> --
> Chuck Lever
>
>
>


* Re: Problem re-establishing GSS contexts after a server reboot
  2016-07-21 16:04       ` Olga Kornievskaia
@ 2016-07-21 17:56         ` Chuck Lever
  2016-07-21 19:54           ` Olga Kornievskaia
From: Chuck Lever @ 2016-07-21 17:56 UTC (permalink / raw)
  To: Olga Kornievskaia; +Cc: Adamson, Andy, Linux NFS Mailing List


> On Jul 21, 2016, at 6:04 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> 
> On Thu, Jul 21, 2016 at 2:55 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>> 
>>> On Jul 20, 2016, at 6:56 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>> 
>>> On Wed, Jul 20, 2016 at 5:14 AM, Adamson, Andy
>>> <William.Adamson@netapp.com> wrote:
>>>> 
>>>>> On Jul 19, 2016, at 10:51 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>> 
>>>>> Hi Andy-
>>>>> 
>>>>> Thanks for taking the time to discuss this with me. I've
>>>>> copied linux-nfs to make this e-mail also an upstream bug
>>>>> report.
>>>>> 
>>>>> As we saw in the network capture, recovery of GSS contexts
>>>>> after a server reboot fails in certain cases with NFSv4.0
>>>>> and NFSv4.1 mount points.
>>>>> 
>>>>> The reproducer is a simple program that generates one NFS
>>>>> WRITE periodically, run while the server repeatedly reboots
>>>>> (or one cluster head fails over to the other and back). The
>>>>> goal of the reproducer is to identify problems with state
>>>>> recovery without a lot of other I/O going on to clutter up
>>>>> the network capture.
>>>>> 
>>>>> In the failing case, sec=krb5 is specified on the mount
>>>>> point, and the reproducer is run as root. We've found this
>>>>> combination fails with both NFSv4.0 and NFSv4.1.
>>>>> 
>>>>> At mount time, the client establishes a GSS context for
>>>>> lease management operations, which is bound to the client's
>>>>> NFS service principal and uses GSS service "integrity."
>>>>> Call this GSS context 1.
>>>>> 
>>>>> When the reproducer starts, a second GSS context is
>>>>> established for NFS operations associated with that user.
>>>>> Since the reproducer is running as root, this context is
>>>>> also bound to the client's NFS service principal, but it
>>>>> uses the GSS service "none" (reflecting the explicit
>>>>> request for "sec=krb5"). Call this GSS context 2.
>>>>> 
>>>>> After the server reboots, the client re-establishes a TCP
>>>>> connection with the server, and performs a RENEW
>>>>> operation using context 1. Thanks to the server reboot,
>>>>> contexts 1 and 2 are now stale. The server thus rejects
>>>>> the RPC with RPCSEC_GSS_CTXPROBLEM.
>>>>> 
>>>>> The client performs a GSS_INIT_SEC_CONTEXT via an NFSv4
>>>>> NULL operation. Call this GSS context 3.
>>>>> 
>>>>> Interestingly, the client does not resend the RENEW
>>>>> operation at this point (if it did, we wouldn't see this
>>>>> problem at all).
>>>>> 
>>>>> The client then attempts to resume the reproducer workload.
>>>>> It sends an NFSv4 WRITE operation, using the first available
>>>>> GSS context in UID 0's credential cache, which is context 3,
>>>>> already bound to the client's NFS service principal. But GSS
>>>>> service "none" is used for this operation, since it is on
>>>>> behalf of the mount where sec=krb5 was specified.
>>>>> 
>>>>> The RPC is accepted, but the server reports
>>>>> NFS4ERR_STALE_STATEID, since it has recently rebooted.
>>>>> 
>>>>> The client responds by attempting state recovery. The
>>>>> first operation it tries is another RENEW. Since this is
>>>>> a lease management operation, the client looks in UID 0's
>>>>> credential cache again and finds the recently established
>>>>> context 3. It tries the RENEW operation using GSS context
>>>>> 3 with GSS service "integrity."
>>>>> 
>>>>> The server rejects the RENEW RPC with AUTH_FAILED, and
>>>>> the client reports that "check lease failed" and
>>>>> terminates state recovery.
>>>>> 
>>>>> The client re-drives the WRITE operation with the stale
>>>>> stateid with predictable results. The client again tries
>>>>> to recover state by sending a RENEW, and still uses the
>>>>> same GSS context 3 with service "integrity" and gets the
>>>>> same result. A (perhaps slow-motion) STALE_STATEID loop
>>>>> ensues, and the client mount point is deadlocked.
>>>>> 
>>>>> Your analysis was that because the reproducer is run as
>>>>> root, both the reproducer's I/O operations, and lease
>>>>> management operations, attempt to use the same GSS context
>>>>> in UID 0's credential cache, but each uses different GSS
>>>>> services.
>>>> 
>>>> As RFC2203 states, "In a creation request, the seq_num and service fields are undefined and both must be ignored by the server”
>>>> So a context creation request while kicked off by an operation with a service attached (e.g. WRITE uses rpc_gss_svc_none and RENEW uses rpc_gss_svc_integrity), can be used by either service level.
>>>> AFAICS a single GSS context could in theory be used for all service levels, but in practice, GSS contexts are restricted to a service level (by client? by server? ) once they are used.
>>>> 
>>>> 
>>>>> The key issue seems to be why, when the mount
>>>>> is first established, the client is correctly able to
>>>>> establish two separate GSS contexts for UID 0; but after
>>>>> a server reboot, the client attempts to use the same GSS
>>>>> context with two different GSS services.
>>>> 
>>>> I speculate that it is a race between the WRITE and the RENEW to use the same newly created GSS context that has not been used yet, and so has no assigned service level, and the two requests race to set the service level.
>>> 
>>> I agree with Andy. It must be a tight race.
>> 
>> In one capture I see something like this after
>> the server restarts:
>> 
>> SYN
>> SYN, ACK
>> ACK
>> C WRITE
>> C SEQUENCE
>> R WRITE -> CTX_PROBLEM
>> R SEQUENCE -> CTX_PROBLEM
>> C NULL (KRB5_AP_REQ)
>> R NULL (KRB5_AP_REP)
>> C WRITE
>> C SEQUENCE
>> R WRITE -> NFS4ERR_STALE_STATEID
>> R SEQUENCE -> AUTH_FAILED
>> 
>> Andy's theory neatly explains this behavior.
>> 
>> 
>>> I have tried to reproduce
>>> your scenario and in my tests of rebooting the server all recover
>>> correctly. In my case, if RENEW was the one hitting the AUTH_ERR then
>>> the new context is established and then RENEW using integrity service
>>> is retried with the new context which gets ERR_STALE_CLIENTID which
>>> then client recovers from. If it's an operation (I have a GETATTR)
>>> that gets AUTH_ERR, then it gets new context and is retried using none
>>> service. Then RENEW gets its own AUTH_ERR as it uses a different
>>> context, a new context is gotten, RENEW is retried over integrity and
>>> gets ERR_STALE_CLIENTID which it recovers from.
>> 
>> If one operation is allowed to complete, then
>> the other will always recognize that another
>> fresh GSS context is needed. If two are sent
>> at the same time, they race and one always
>> fails.
>> 
>> Helen's test includes a second idle mount point
>> (sec=krb5i) and maybe that is needed to trigger
>> the race?
>> 
> 
> Chuck, any chance to get "rpcdebug -m rpc auth" output during the
> failure (gssd optionally) (i realize that it might alter the timings
> and not hit the issue but worth a shot)?

I'm sure that's fine. An internal tester hit this,
not a customer, so I will ask.

I agree, though, that timing might be a problem:
these systems all have real serial consoles via
iLOM, so /v/l/m traffic does bring everything to
a standstill.

Meanwhile, what's your opinion about AUTH_FAILED?
Should the server return RPCSEC_GSS_CTXPROBLEM
in this case instead? If it did, do you think
the Linux client would recover by creating a
replacement GSS context?



>>>> —>Andy
>>>>> 
>>>>> One solution is to introduce a quick check before a
>>>>> context is used to see if the GSS service bound to it
>>>>> matches the GSS service that the caller intends to use.
>>>>> I'm not sure how that can be done without exposing a window
>>>>> where another caller requests the use of a GSS context and
>>>>> grabs the fresh one, before it can be used by our first
>>>>> caller and bound to its desired GSS service.
>>>>> 
>>>>> Other solutions might be to somehow isolate the credential
>>>>> cache used for lease management operations, or to split
>>>>> credential caches by GSS service.
>>>>> 
>>>>> 
>>>>> --
>>>>> Chuck Lever
>>>>> 
>>>>> 
>>>>> 
>>>> 
>> 
>> --
>> Chuck Lever
>> 
>> 
>> 

--
Chuck Lever





* Re: Problem re-establishing GSS contexts after a server reboot
  2016-07-21 17:56         ` Chuck Lever
@ 2016-07-21 19:54           ` Olga Kornievskaia
  2016-07-21 20:46             ` Olga Kornievskaia
From: Olga Kornievskaia @ 2016-07-21 19:54 UTC (permalink / raw)
  To: Chuck Lever; +Cc: Adamson, Andy, Linux NFS Mailing List

On Thu, Jul 21, 2016 at 1:56 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
>
>> On Jul 21, 2016, at 6:04 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>
>> On Thu, Jul 21, 2016 at 2:55 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>
>>>> On Jul 20, 2016, at 6:56 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>
>>>> On Wed, Jul 20, 2016 at 5:14 AM, Adamson, Andy
>>>> <William.Adamson@netapp.com> wrote:
>>>>>
>>>>>> On Jul 19, 2016, at 10:51 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>>>
>>>>>> Hi Andy-
>>>>>>
>>>>>> Thanks for taking the time to discuss this with me. I've
>>>>>> copied linux-nfs to make this e-mail also an upstream bug
>>>>>> report.
>>>>>>
>>>>>> As we saw in the network capture, recovery of GSS contexts
>>>>>> after a server reboot fails in certain cases with NFSv4.0
>>>>>> and NFSv4.1 mount points.
>>>>>>
>>>>>> The reproducer is a simple program that generates one NFS
>>>>>> WRITE periodically, run while the server repeatedly reboots
>>>>>> (or one cluster head fails over to the other and back). The
>>>>>> goal of the reproducer is to identify problems with state
>>>>>> recovery without a lot of other I/O going on to clutter up
>>>>>> the network capture.
>>>>>>
>>>>>> In the failing case, sec=krb5 is specified on the mount
>>>>>> point, and the reproducer is run as root. We've found this
>>>>>> combination fails with both NFSv4.0 and NFSv4.1.
>>>>>>
>>>>>> At mount time, the client establishes a GSS context for
>>>>>> lease management operations, which is bound to the client's
>>>>>> NFS service principal and uses GSS service "integrity."
>>>>>> Call this GSS context 1.
>>>>>>
>>>>>> When the reproducer starts, a second GSS context is
>>>>>> established for NFS operations associated with that user.
>>>>>> Since the reproducer is running as root, this context is
>>>>>> also bound to the client's NFS service principal, but it
>>>>>> uses the GSS service "none" (reflecting the explicit
>>>>>> request for "sec=krb5"). Call this GSS context 2.
>>>>>>
>>>>>> After the server reboots, the client re-establishes a TCP
>>>>>> connection with the server, and performs a RENEW
>>>>>> operation using context 1. Thanks to the server reboot,
>>>>>> contexts 1 and 2 are now stale. The server thus rejects
>>>>>> the RPC with RPCSEC_GSS_CTXPROBLEM.
>>>>>>
>>>>>> The client performs a GSS_INIT_SEC_CONTEXT via an NFSv4
>>>>>> NULL operation. Call this GSS context 3.
>>>>>>
>>>>>> Interestingly, the client does not resend the RENEW
>>>>>> operation at this point (if it did, we wouldn't see this
>>>>>> problem at all).
>>>>>>
>>>>>> The client then attempts to resume the reproducer workload.
>>>>>> It sends an NFSv4 WRITE operation, using the first available
>>>>>> GSS context in UID 0's credential cache, which is context 3,
>>>>>> already bound to the client's NFS service principal. But GSS
>>>>>> service "none" is used for this operation, since it is on
>>>>>> behalf of the mount where sec=krb5 was specified.
>>>>>>
>>>>>> The RPC is accepted, but the server reports
>>>>>> NFS4ERR_STALE_STATEID, since it has recently rebooted.
>>>>>>
>>>>>> The client responds by attempting state recovery. The
>>>>>> first operation it tries is another RENEW. Since this is
>>>>>> a lease management operation, the client looks in UID 0's
>>>>>> credential cache again and finds the recently established
>>>>>> context 3. It tries the RENEW operation using GSS context
>>>>>> 3 with GSS service "integrity."
>>>>>>
>>>>>> The server rejects the RENEW RPC with AUTH_FAILED, and
>>>>>> the client reports that "check lease failed" and
>>>>>> terminates state recovery.
>>>>>>
>>>>>> The client re-drives the WRITE operation with the stale
>>>>>> stateid with predictable results. The client again tries
>>>>>> to recover state by sending a RENEW, and still uses the
>>>>>> same GSS context 3 with service "integrity" and gets the
>>>>>> same result. A (perhaps slow-motion) STALE_STATEID loop
>>>>>> ensues, and the client mount point is deadlocked.
>>>>>>
>>>>>> Your analysis was that because the reproducer is run as
>>>>>> root, both the reproducer's I/O operations, and lease
>>>>>> management operations, attempt to use the same GSS context
>>>>>> in UID 0's credential cache, but each uses different GSS
>>>>>> services.
>>>>>
>>>>> As RFC2203 states, "In a creation request, the seq_num and service fields are undefined and both must be ignored by the server”
>>>>> So a context creation request while kicked off by an operation with a service attached (e.g. WRITE uses rpc_gss_svc_none and RENEW uses rpc_gss_svc_integrity), can be used by either service level.
>>>>> AFAICS a single GSS context could in theory be used for all service levels, but in practice, GSS contexts are restricted to a service level (by client? by server? ) once they are used.
>>>>>
>>>>>
>>>>>> The key issue seems to be why, when the mount
>>>>>> is first established, the client is correctly able to
>>>>>> establish two separate GSS contexts for UID 0; but after
>>>>>> a server reboot, the client attempts to use the same GSS
>>>>>> context with two different GSS services.
>>>>>
>>>>> I speculate that it is a race between the WRITE and the RENEW to use the same newly created GSS context that has not been used yet, and so has no assigned service level, and the two requests race to set the service level.
>>>>
>>>> I agree with Andy. It must be a tight race.
>>>
>>> In one capture I see something like this after
>>> the server restarts:
>>>
>>> SYN
>>> SYN, ACK
>>> ACK
>>> C WRITE
>>> C SEQUENCE
>>> R WRITE -> CTX_PROBLEM
>>> R SEQUENCE -> CTX_PROBLEM
>>> C NULL (KRB5_AP_REQ)
>>> R NULL (KRB5_AP_REP)
>>> C WRITE
>>> C SEQUENCE
>>> R WRITE -> NFS4ERR_STALE_STATEID
>>> R SEQUENCE -> AUTH_FAILED
>>>
>>> Andy's theory neatly explains this behavior.
>>>
>>>
>>>> I have tried to reproduce
>>>> your scenario and in my tests of rebooting the server all recover
>>>> correctly. In my case, if RENEW was the one hitting the AUTH_ERR then
>>>> the new context is established and then RENEW using integrity service
>>>> is retried with the new context which gets ERR_STALE_CLIENTID which
>>>> then client recovers from. If it's an operation (I have a GETATTR)
>>>> that gets AUTH_ERR, then it gets new context and is retried using none
>>>> service. Then RENEW gets its own AUTH_ERR as it uses a different
>>>> context, a new context is gotten, RENEW is retried over integrity and
>>>> gets ERR_STALE_CLIENTID which it recovers from.
>>>
>>> If one operation is allowed to complete, then
>>> the other will always recognize that another
>>> fresh GSS context is needed. If two are sent
>>> at the same time, they race and one always
>>> fails.
>>>
>>> Helen's test includes a second idle mount point
>>> (sec=krb5i) and maybe that is needed to trigger
>>> the race?
>>>
>>
>> Chuck, any chance to get "rpcdebug -m rpc auth" output during the
>> failure (gssd optionally) (i realize that it might alter the timings
>> and not hit the issue but worth a shot)?
>
> I'm sure that's fine. An internal tester hit this,
> not a customer, so I will ask.
>
> I agree, though, that timing might be a problem:
> these systems all have real serial consoles via
> iLOM, so /v/l/m traffic does bring everything to
> a standstill.
>
> Meanwhile, what's your opinion about AUTH_FAILED?
> Should the server return RPCSEC_GSS_CTXPROBLEM
> in this case instead? If it did, do you think
> the Linux client would recover by creating a
> replacement GSS context?

Ah, yes, I equated AUTH_FAILED and AUTH_ERROR in my mind. If the client
receives the reason as AUTH_FAILED, as opposed to CTXPROBLEM, it will
fail with an EIO error and will not try to create a new GSS context. So
yes, I believe it would help if the server returns any of the
following errors:
                case RPC_AUTH_REJECTEDCRED:
                case RPC_AUTH_REJECTEDVERF:
                case RPCSEC_GSS_CREDPROBLEM:
                case RPCSEC_GSS_CTXPROBLEM:

then the client will recreate the context.
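
A condensed sketch of that decision follows (illustrative only; this
is not the actual net/sunrpc code, and the function name is made up).
It shows why AUTH_FAILED lands on the fatal path instead of triggering
a context refresh:

/* Map an auth_stat rejection to a client action; numbers match the
 * list above. Sketch only, not the SUNRPC implementation. */
enum refresh_action { REFRESH_CTX_AND_RETRY, FAIL_WITH_EIO };

static enum refresh_action gss_handle_auth_stat(unsigned int auth_stat)
{
	switch (auth_stat) {
	case 2:		/* RPC_AUTH_REJECTEDCRED */
	case 4:		/* RPC_AUTH_REJECTEDVERF */
	case 13:	/* RPCSEC_GSS_CREDPROBLEM */
	case 14:	/* RPCSEC_GSS_CTXPROBLEM */
		/* Context or credential is stale: destroy it, run
		 * GSS_INIT_SEC_CONTEXT again, and retry the RPC. */
		return REFRESH_CTX_AND_RETRY;
	case 7:		/* AUTH_FAILED: "unknown reason" */
	default:
		/* Nothing to retry; the RPC fails with EIO, which is
		 * what terminates state recovery in this report. */
		return FAIL_WITH_EIO;
	}
}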


>
>
>
>>>>> —>Andy
>>>>>>
>>>>>> One solution is to introduce a quick check before a
>>>>>> context is used to see if the GSS service bound to it
>>>>>> matches the GSS service that the caller intends to use.
>>>>>> I'm not sure how that can be done without exposing a window
>>>>>> where another caller requests the use of a GSS context and
>>>>>> grabs the fresh one, before it can be used by our first
>>>>>> caller and bound to its desired GSS service.
>>>>>>
>>>>>> Other solutions might be to somehow isolate the credential
>>>>>> cache used for lease management operations, or to split
>>>>>> credential caches by GSS service.
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Chuck Lever
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>
>>> --
>>> Chuck Lever
>>>
>>>
>>>
>
> --
> Chuck Lever
>
>
>


* Re: Problem re-establishing GSS contexts after a server reboot
  2016-07-21 19:54           ` Olga Kornievskaia
@ 2016-07-21 20:46             ` Olga Kornievskaia
  2016-07-21 21:32               ` Chuck Lever
From: Olga Kornievskaia @ 2016-07-21 20:46 UTC (permalink / raw)
  To: Chuck Lever; +Cc: Adamson, Andy, Linux NFS Mailing List

On Thu, Jul 21, 2016 at 3:54 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> On Thu, Jul 21, 2016 at 1:56 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>
>>> On Jul 21, 2016, at 6:04 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>
>>> On Thu, Jul 21, 2016 at 2:55 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>
>>>>> On Jul 20, 2016, at 6:56 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>>
>>>>> On Wed, Jul 20, 2016 at 5:14 AM, Adamson, Andy
>>>>> <William.Adamson@netapp.com> wrote:
>>>>>>
>>>>>>> On Jul 19, 2016, at 10:51 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>>>>
>>>>>>> Hi Andy-
>>>>>>>
>>>>>>> Thanks for taking the time to discuss this with me. I've
>>>>>>> copied linux-nfs to make this e-mail also an upstream bug
>>>>>>> report.
>>>>>>>
>>>>>>> As we saw in the network capture, recovery of GSS contexts
>>>>>>> after a server reboot fails in certain cases with NFSv4.0
>>>>>>> and NFSv4.1 mount points.
>>>>>>>
>>>>>>> The reproducer is a simple program that generates one NFS
>>>>>>> WRITE periodically, run while the server repeatedly reboots
>>>>>>> (or one cluster head fails over to the other and back). The
>>>>>>> goal of the reproducer is to identify problems with state
>>>>>>> recovery without a lot of other I/O going on to clutter up
>>>>>>> the network capture.
>>>>>>>
>>>>>>> In the failing case, sec=krb5 is specified on the mount
>>>>>>> point, and the reproducer is run as root. We've found this
>>>>>>> combination fails with both NFSv4.0 and NFSv4.1.
>>>>>>>
>>>>>>> At mount time, the client establishes a GSS context for
>>>>>>> lease management operations, which is bound to the client's
>>>>>>> NFS service principal and uses GSS service "integrity."
>>>>>>> Call this GSS context 1.
>>>>>>>
>>>>>>> When the reproducer starts, a second GSS context is
>>>>>>> established for NFS operations associated with that user.
>>>>>>> Since the reproducer is running as root, this context is
>>>>>>> also bound to the client's NFS service principal, but it
>>>>>>> uses the GSS service "none" (reflecting the explicit
>>>>>>> request for "sec=krb5"). Call this GSS context 2.
>>>>>>>
>>>>>>> After the server reboots, the client re-establishes a TCP
>>>>>>> connection with the server, and performs a RENEW
>>>>>>> operation using context 1. Thanks to the server reboot,
>>>>>>> contexts 1 and 2 are now stale. The server thus rejects
>>>>>>> the RPC with RPCSEC_GSS_CTXPROBLEM.
>>>>>>>
>>>>>>> The client performs a GSS_INIT_SEC_CONTEXT via an NFSv4
>>>>>>> NULL operation. Call this GSS context 3.
>>>>>>>
>>>>>>> Interestingly, the client does not resend the RENEW
>>>>>>> operation at this point (if it did, we wouldn't see this
>>>>>>> problem at all).
>>>>>>>
>>>>>>> The client then attempts to resume the reproducer workload.
>>>>>>> It sends an NFSv4 WRITE operation, using the first available
>>>>>>> GSS context in UID 0's credential cache, which is context 3,
>>>>>>> already bound to the client's NFS service principal. But GSS
>>>>>>> service "none" is used for this operation, since it is on
>>>>>>> behalf of the mount where sec=krb5 was specified.
>>>>>>>
>>>>>>> The RPC is accepted, but the server reports
>>>>>>> NFS4ERR_STALE_STATEID, since it has recently rebooted.
>>>>>>>
>>>>>>> The client responds by attempting state recovery. The
>>>>>>> first operation it tries is another RENEW. Since this is
>>>>>>> a lease management operation, the client looks in UID 0's
>>>>>>> credential cache again and finds the recently established
>>>>>>> context 3. It tries the RENEW operation using GSS context
>>>>>>> 3 with GSS service "integrity."
>>>>>>>
>>>>>>> The server rejects the RENEW RPC with AUTH_FAILED, and
>>>>>>> the client reports that "check lease failed" and
>>>>>>> terminates state recovery.
>>>>>>>
>>>>>>> The client re-drives the WRITE operation with the stale
>>>>>>> stateid with predictable results. The client again tries
>>>>>>> to recover state by sending a RENEW, and still uses the
>>>>>>> same GSS context 3 with service "integrity" and gets the
>>>>>>> same result. A (perhaps slow-motion) STALE_STATEID loop
>>>>>>> ensues, and the client mount point is deadlocked.
>>>>>>>
>>>>>>> Your analysis was that because the reproducer is run as
>>>>>>> root, both the reproducer's I/O operations, and lease
>>>>>>> management operations, attempt to use the same GSS context
>>>>>>> in UID 0's credential cache, but each uses different GSS
>>>>>>> services.
>>>>>>
>>>>>> As RFC2203 states, "In a creation request, the seq_num and service fields are undefined and both must be ignored by the server."
>>>>>> So a context creation request, while kicked off by an operation with a service attached (e.g. WRITE uses rpc_gss_svc_none and RENEW uses rpc_gss_svc_integrity), can be used at either service level.
>>>>>> AFAICS a single GSS context could in theory be used for all service levels, but in practice, GSS contexts are restricted to a service level (by the client? by the server?) once they are used.
>>>>>>
>>>>>>
>>>>>>> The key issue seems to be why, when the mount
>>>>>>> is first established, the client is correctly able to
>>>>>>> establish two separate GSS contexts for UID 0; but after
>>>>>>> a server reboot, the client attempts to use the same GSS
>>>>>>> context with two different GSS services.
>>>>>>
>>>>>> I speculate that it is a race between the WRITE and the RENEW to use the same newly created GSS context that has not been used yet, and so has no assigned service level, and the two requests race to set the service level.
>>>>>
>>>>> I agree with Andy. It must be a tight race.
>>>>
>>>> In one capture I see something like this after
>>>> the server restarts:
>>>>
>>>> SYN
>>>> SYN, ACK
>>>> ACK
>>>> C WRITE
>>>> C SEQUENCE
>>>> R WRITE -> CTX_PROBLEM
>>>> R SEQUENCE -> CTX_PROBLEM
>>>> C NULL (KRB5_AP_REQ)
>>>> R NULL (KRB5_AP_REP)
>>>> C WRITE
>>>> C SEQUENCE
>>>> R WRITE -> NFS4ERR_STALE_STATEID
>>>> R SEQUENCE -> AUTH_FAILED
>>>>
>>>> Andy's theory neatly explains this behavior.
>>>>
>>>>
>>>>> I have tried to reproduce
>>>>> your scenario and in my tests of rebooting the server all recover
>>>>> correctly. In my case, if RENEW was the one hitting the AUTH_ERR then
>>>>> the new context is established and then RENEW using integrity service
>>>>> is retried with the new context which gets ERR_STALE_CLIENTID which
>>>>> then client recovers from. If it's an operation (I have a GETATTR)
>>>>> that gets AUTH_ERR, then it gets new context and is retried using none
>>>>> service. Then RENEW gets its own AUTH_ERR as it uses a different
>>>>> context, a new context is gotten, RENEW is retried over integrity and
>>>>> gets ERR_STALE_CLIENTID which it recovers from.
>>>>
>>>> If one operation is allowed to complete, then
>>>> the other will always recognize that another
>>>> fresh GSS context is needed. If two are sent
>>>> at the same time, they race and one always
>>>> fails.
>>>>
>>>> Helen's test includes a second idle mount point
>>>> (sec=krb5i) and maybe that is needed to trigger
>>>> the race?
>>>>
>>>
>>> Chuck, any chance to get "rpcdebug -m rpc auth" output during the
>>> failure (gssd optionally) (I realize that it might alter the timings
>>> and not hit the issue but worth a shot)?
>>
>> I'm sure that's fine. An internal tester hit this,
>> not a customer, so I will ask.
>>
>> I agree, though, that timing might be a problem:
>> these systems all have real serial consoles via
>> iLOM, so /v/l/m traffic does bring everything to
>> a standstill.
>>
>> Meanwhile, what's your opinion about AUTH_FAILED?
>> Should the server return RPCSEC_GSS_CTXPROBLEM
>> in this case instead? If it did, do you think
>> the Linux client would recover by creating a
>> replacement GSS context?
>
> Ah, yes, I equated AUTH_FAILED and AUTH_ERROR in my mind. If the client
> receives the reason as AUTH_FAILED as opposed to CTXPROBLEM, it will
> fail with an EIO error and will not try to create a new GSS context. So
> yes, I believe it would help if the server returns any of the
> following errors:
>                 case RPC_AUTH_REJECTEDCRED:
>                 case RPC_AUTH_REJECTEDVERF:
>                 case RPCSEC_GSS_CREDPROBLEM:
>                 case RPCSEC_GSS_CTXPROBLEM:
>
> then the client will recreate the context.
>

Also in my testing, I can see that the credential cache is per GSS flavor.
Just to check, what kernel version is this problem encountered on? (I
know you said upstream, but I just want to double-check so that I can
look at the correct source code.)

Thanks.

>
>>
>>
>>
>>>>>> —>Andy
>>>>>>>
>>>>>>> One solution is to introduce a quick check before a
>>>>>>> context is used to see if the GSS service bound to it
>>>>>>> matches the GSS service that the caller intends to use.
>>>>>>> I'm not sure how that can be done without exposing a window
>>>>>>> where another caller requests the use of a GSS context and
>>>>>>> grabs the fresh one, before it can be used by our first
>>>>>>> caller and bound to its desired GSS service.
>>>>>>>
>>>>>>> Other solutions might be to somehow isolate the credential
>>>>>>> cache used for lease management operations, or to split
>>>>>>> credential caches by GSS service.
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Chuck Lever
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>
>>>> --
>>>> Chuck Lever
>>>>
>>>>
>>>>
>>
>> --
>> Chuck Lever
>>
>>
>>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Problem re-establishing GSS contexts after a server reboot
  2016-07-21 20:46             ` Olga Kornievskaia
@ 2016-07-21 21:32               ` Chuck Lever
  2016-07-25 18:18                 ` Olga Kornievskaia
  0 siblings, 1 reply; 25+ messages in thread
From: Chuck Lever @ 2016-07-21 21:32 UTC (permalink / raw)
  To: Olga Kornievskaia; +Cc: Adamson, Andy, Linux NFS Mailing List



> On Jul 21, 2016, at 10:46 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> 
>> On Thu, Jul 21, 2016 at 3:54 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>> On Thu, Jul 21, 2016 at 1:56 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>> 
>>>> On Jul 21, 2016, at 6:04 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>> 
>>>> On Thu, Jul 21, 2016 at 2:55 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>> 
>>>>>> On Jul 20, 2016, at 6:56 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>>> 
>>>>>> On Wed, Jul 20, 2016 at 5:14 AM, Adamson, Andy
>>>>>> <William.Adamson@netapp.com> wrote:
>>>>>>> 
>>>>>>>> On Jul 19, 2016, at 10:51 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>>>>> 
>>>>>>>> Hi Andy-
>>>>>>>> 
>>>>>>>> Thanks for taking the time to discuss this with me. I've
>>>>>>>> copied linux-nfs to make this e-mail also an upstream bug
>>>>>>>> report.
>>>>>>>> 
>>>>>>>> As we saw in the network capture, recovery of GSS contexts
>>>>>>>> after a server reboot fails in certain cases with NFSv4.0
>>>>>>>> and NFSv4.1 mount points.
>>>>>>>> 
>>>>>>>> The reproducer is a simple program that generates one NFS
>>>>>>>> WRITE periodically, run while the server repeatedly reboots
>>>>>>>> (or one cluster head fails over to the other and back). The
>>>>>>>> goal of the reproducer is to identify problems with state
>>>>>>>> recovery without a lot of other I/O going on to clutter up
>>>>>>>> the network capture.
>>>>>>>> 
>>>>>>>> In the failing case, sec=krb5 is specified on the mount
>>>>>>>> point, and the reproducer is run as root. We've found this
>>>>>>>> combination fails with both NFSv4.0 and NFSv4.1.
>>>>>>>> 
>>>>>>>> At mount time, the client establishes a GSS context for
>>>>>>>> lease management operations, which is bound to the client's
>>>>>>>> NFS service principal and uses GSS service "integrity."
>>>>>>>> Call this GSS context 1.
>>>>>>>> 
>>>>>>>> When the reproducer starts, a second GSS context is
>>>>>>>> established for NFS operations associated with that user.
>>>>>>>> Since the reproducer is running as root, this context is
>>>>>>>> also bound to the client's NFS service principal, but it
>>>>>>>> uses the GSS service "none" (reflecting the explicit
>>>>>>>> request for "sec=krb5"). Call this GSS context 2.
>>>>>>>> 
>>>>>>>> After the server reboots, the client re-establishes a TCP
>>>>>>>> connection with the server, and performs a RENEW
>>>>>>>> operation using context 1. Thanks to the server reboot,
>>>>>>>> contexts 1 and 2 are now stale. The server thus rejects
>>>>>>>> the RPC with RPCSEC_GSS_CTXPROBLEM.
>>>>>>>> 
>>>>>>>> The client performs a GSS_INIT_SEC_CONTEXT via an NFSv4
>>>>>>>> NULL operation. Call this GSS context 3.
>>>>>>>> 
>>>>>>>> Interestingly, the client does not resend the RENEW
>>>>>>>> operation at this point (if it did, we wouldn't see this
>>>>>>>> problem at all).
>>>>>>>> 
>>>>>>>> The client then attempts to resume the reproducer workload.
>>>>>>>> It sends an NFSv4 WRITE operation, using the first available
>>>>>>>> GSS context in UID 0's credential cache, which is context 3,
>>>>>>>> already bound to the client's NFS service principal. But GSS
>>>>>>>> service "none" is used for this operation, since it is on
>>>>>>>> behalf of the mount where sec=krb5 was specified.
>>>>>>>> 
>>>>>>>> The RPC is accepted, but the server reports
>>>>>>>> NFS4ERR_STALE_STATEID, since it has recently rebooted.
>>>>>>>> 
>>>>>>>> The client responds by attempting state recovery. The
>>>>>>>> first operation it tries is another RENEW. Since this is
>>>>>>>> a lease management operation, the client looks in UID 0's
>>>>>>>> credential cache again and finds the recently established
>>>>>>>> context 3. It tries the RENEW operation using GSS context
>>>>>>>> 3 with GSS service "integrity."
>>>>>>>> 
>>>>>>>> The server rejects the RENEW RPC with AUTH_FAILED, and
>>>>>>>> the client reports that "check lease failed" and
>>>>>>>> terminates state recovery.
>>>>>>>> 
>>>>>>>> The client re-drives the WRITE operation with the stale
>>>>>>>> stateid with predictable results. The client again tries
>>>>>>>> to recover state by sending a RENEW, and still uses the
>>>>>>>> same GSS context 3 with service "integrity" and gets the
>>>>>>>> same result. A (perhaps slow-motion) STALE_STATEID loop
>>>>>>>> ensues, and the client mount point is deadlocked.
>>>>>>>> 
>>>>>>>> Your analysis was that because the reproducer is run as
>>>>>>>> root, both the reproducer's I/O operations, and lease
>>>>>>>> management operations, attempt to use the same GSS context
>>>>>>>> in UID 0's credential cache, but each uses different GSS
>>>>>>>> services.
>>>>>>> 
>>>>>>> As RFC2203 states, "In a creation request, the seq_num and service fields are undefined and both must be ignored by the server."
>>>>>>> So a context creation request, while kicked off by an operation with a service attached (e.g. WRITE uses rpc_gss_svc_none and RENEW uses rpc_gss_svc_integrity), can be used at either service level.
>>>>>>> AFAICS a single GSS context could in theory be used for all service levels, but in practice, GSS contexts are restricted to a service level (by the client? by the server?) once they are used.
>>>>>>> 
>>>>>>> 
>>>>>>>> The key issue seems to be why, when the mount
>>>>>>>> is first established, the client is correctly able to
>>>>>>>> establish two separate GSS contexts for UID 0; but after
>>>>>>>> a server reboot, the client attempts to use the same GSS
>>>>>>>> context with two different GSS services.
>>>>>>> 
>>>>>>> I speculate that it is a race between the WRITE and the RENEW to use the same newly created GSS context that has not been used yet, and so has no assigned service level, and the two requests race to set the service level.
>>>>>> 
>>>>>> I agree with Andy. It must be a tight race.
>>>>> 
>>>>> In one capture I see something like this after
>>>>> the server restarts:
>>>>> 
>>>>> SYN
>>>>> SYN, ACK
>>>>> ACK
>>>>> C WRITE
>>>>> C SEQUENCE
>>>>> R WRITE -> CTX_PROBLEM
>>>>> R SEQUENCE -> CTX_PROBLEM
>>>>> C NULL (KRB5_AP_REQ)
>>>>> R NULL (KRB5_AP_REP)
>>>>> C WRITE
>>>>> C SEQUENCE
>>>>> R WRITE -> NFS4ERR_STALE_STATEID
>>>>> R SEQUENCE -> AUTH_FAILED
>>>>> 
>>>>> Andy's theory neatly explains this behavior.
>>>>> 
>>>>> 
>>>>>> I have tried to reproduce
>>>>>> your scenario and in my tests of rebooting the server all recover
>>>>>> correctly. In my case, if RENEW was the one hitting the AUTH_ERR then
>>>>>> the new context is established and then RENEW using integrity service
>>>>>> is retried with the new context which gets ERR_STALE_CLIENTID which
>>>>>> then client recovers from. If it's an operation (I have a GETATTR)
>>>>>> that gets AUTH_ERR, then it gets new context and is retried using none
>>>>>> service. Then RENEW gets its own AUTH_ERR as it uses a different
>>>>>> context, a new context is gotten, RENEW is retried over integrity and
>>>>>> gets ERR_STALE_CLIENTID which it recovers from.
>>>>> 
>>>>> If one operation is allowed to complete, then
>>>>> the other will always recognize that another
>>>>> fresh GSS context is needed. If two are sent
>>>>> at the same time, they race and one always
>>>>> fails.
>>>>> 
>>>>> Helen's test includes a second idle mount point
>>>>> (sec=krb5i) and maybe that is needed to trigger
>>>>> the race?
>>>> 
>>>> Chuck, any chance to get "rpcdebug -m rpc auth" output during the
>>>> failure (gssd optionally) (I realize that it might alter the timings
>>>> and not hit the issue but worth a shot)?
>>> 
>>> I'm sure that's fine. An internal tester hit this,
>>> not a customer, so I will ask.
>>> 
>>> I agree, though, that timing might be a problem:
>>> these systems all have real serial consoles via
>>> iLOM, so /v/l/m traffic does bring everything to
>>> a standstill.
>>> 
>>> Meanwhile, what's your opinion about AUTH_FAILED?
>>> Should the server return RPCSEC_GSS_CTXPROBLEM
>>> in this case instead? If it did, do you think
>>> the Linux client would recover by creating a
>>> replacement GSS context?
>> 
>> Ah, yes, I equated AUTH_FAILED and AUTH_ERROR in my mind. If the client
>> receives the reason as AUTH_FAILED as opposed to CTXPROBLEM, it will
>> fail with an EIO error and will not try to create a new GSS context. So
>> yes, I believe it would help if the server returns any of the
>> following errors:
>>                case RPC_AUTH_REJECTEDCRED:
>>                case RPC_AUTH_REJECTEDVERF:
>>                case RPCSEC_GSS_CREDPROBLEM:
>>                case RPCSEC_GSS_CTXPROBLEM:
>> 
>> then the client will recreate the context.
> 
> Also in my testing, I can see that the credential cache is per GSS flavor.
> Just to check, what kernel version is this problem encountered on? (I
> know you said upstream, but I just want to double-check so that I can
> look at the correct source code.)

v4.1.12 (stable) I think.


> 
> Thanks.
> 
>> 
>>> 
>>> 
>>> 
>>>>>>> —>Andy
>>>>>>>> 
>>>>>>>> One solution is to introduce a quick check before a
>>>>>>>> context is used to see if the GSS service bound to it
>>>>>>>> matches the GSS service that the caller intends to use.
>>>>>>>> I'm not sure how that can be done without exposing a window
>>>>>>>> where another caller requests the use of a GSS context and
>>>>>>>> grabs the fresh one, before it can be used by our first
>>>>>>>> caller and bound to its desired GSS service.
>>>>>>>> 
>>>>>>>> Other solutions might be to somehow isolate the credential
>>>>>>>> cache used for lease management operations, or to split
>>>>>>>> credential caches by GSS service.
>>>>>>>> 
>>>>>>>> 
>>>>>>>> --
>>>>>>>> Chuck Lever
>>>>> 
>>>>> --
>>>>> Chuck Lever
>>> 
>>> --
>>> Chuck Lever


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Problem re-establishing GSS contexts after a server reboot
  2016-07-21 21:32               ` Chuck Lever
@ 2016-07-25 18:18                 ` Olga Kornievskaia
  2016-07-29 16:27                   ` Olga Kornievskaia
  0 siblings, 1 reply; 25+ messages in thread
From: Olga Kornievskaia @ 2016-07-25 18:18 UTC (permalink / raw)
  To: Chuck Lever; +Cc: Adamson, Andy, Linux NFS Mailing List

On Thu, Jul 21, 2016 at 5:32 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
>
>
>> On Jul 21, 2016, at 10:46 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>
>>> On Thu, Jul 21, 2016 at 3:54 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>> On Thu, Jul 21, 2016 at 1:56 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>
>>>>> On Jul 21, 2016, at 6:04 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>>
>>>>> On Thu, Jul 21, 2016 at 2:55 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>>>
>>>>>>> On Jul 20, 2016, at 6:56 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>>>>
>>>>>>> On Wed, Jul 20, 2016 at 5:14 AM, Adamson, Andy
>>>>>>> <William.Adamson@netapp.com> wrote:
>>>>>>>>
>>>>>>>>> On Jul 19, 2016, at 10:51 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>>>>>>
>>>>>>>>> Hi Andy-
>>>>>>>>>
>>>>>>>>> Thanks for taking the time to discuss this with me. I've
>>>>>>>>> copied linux-nfs to make this e-mail also an upstream bug
>>>>>>>>> report.
>>>>>>>>>
>>>>>>>>> As we saw in the network capture, recovery of GSS contexts
>>>>>>>>> after a server reboot fails in certain cases with NFSv4.0
>>>>>>>>> and NFSv4.1 mount points.
>>>>>>>>>
>>>>>>>>> The reproducer is a simple program that generates one NFS
>>>>>>>>> WRITE periodically, run while the server repeatedly reboots
>>>>>>>>> (or one cluster head fails over to the other and back). The
>>>>>>>>> goal of the reproducer is to identify problems with state
>>>>>>>>> recovery without a lot of other I/O going on to clutter up
>>>>>>>>> the network capture.
>>>>>>>>>
>>>>>>>>> In the failing case, sec=krb5 is specified on the mount
>>>>>>>>> point, and the reproducer is run as root. We've found this
>>>>>>>>> combination fails with both NFSv4.0 and NFSv4.1.
>>>>>>>>>
>>>>>>>>> At mount time, the client establishes a GSS context for
>>>>>>>>> lease management operations, which is bound to the client's
>>>>>>>>> NFS service principal and uses GSS service "integrity."
>>>>>>>>> Call this GSS context 1.
>>>>>>>>>
>>>>>>>>> When the reproducer starts, a second GSS context is
>>>>>>>>> established for NFS operations associated with that user.
>>>>>>>>> Since the reproducer is running as root, this context is
>>>>>>>>> also bound to the client's NFS service principal, but it
>>>>>>>>> uses the GSS service "none" (reflecting the explicit
>>>>>>>>> request for "sec=krb5"). Call this GSS context 2.
>>>>>>>>>
>>>>>>>>> After the server reboots, the client re-establishes a TCP
>>>>>>>>> connection with the server, and performs a RENEW
>>>>>>>>> operation using context 1. Thanks to the server reboot,
>>>>>>>>> contexts 1 and 2 are now stale. The server thus rejects
>>>>>>>>> the RPC with RPCSEC_GSS_CTXPROBLEM.
>>>>>>>>>
>>>>>>>>> The client performs a GSS_INIT_SEC_CONTEXT via an NFSv4
>>>>>>>>> NULL operation. Call this GSS context 3.
>>>>>>>>>
>>>>>>>>> Interestingly, the client does not resend the RENEW
>>>>>>>>> operation at this point (if it did, we wouldn't see this
>>>>>>>>> problem at all).
>>>>>>>>>
>>>>>>>>> The client then attempts to resume the reproducer workload.
>>>>>>>>> It sends an NFSv4 WRITE operation, using the first available
>>>>>>>>> GSS context in UID 0's credential cache, which is context 3,
>>>>>>>>> already bound to the client's NFS service principal. But GSS
>>>>>>>>> service "none" is used for this operation, since it is on
>>>>>>>>> behalf of the mount where sec=krb5 was specified.
>>>>>>>>>
>>>>>>>>> The RPC is accepted, but the server reports
>>>>>>>>> NFS4ERR_STALE_STATEID, since it has recently rebooted.
>>>>>>>>>
>>>>>>>>> The client responds by attempting state recovery. The
>>>>>>>>> first operation it tries is another RENEW. Since this is
>>>>>>>>> a lease management operation, the client looks in UID 0's
>>>>>>>>> credential cache again and finds the recently established
>>>>>>>>> context 3. It tries the RENEW operation using GSS context
>>>>>>>>> 3 with GSS service "integrity."
>>>>>>>>>
>>>>>>>>> The server rejects the RENEW RPC with AUTH_FAILED, and
>>>>>>>>> the client reports that "check lease failed" and
>>>>>>>>> terminates state recovery.
>>>>>>>>>
>>>>>>>>> The client re-drives the WRITE operation with the stale
>>>>>>>>> stateid with predictable results. The client again tries
>>>>>>>>> to recover state by sending a RENEW, and still uses the
>>>>>>>>> same GSS context 3 with service "integrity" and gets the
>>>>>>>>> same result. A (perhaps slow-motion) STALE_STATEID loop
>>>>>>>>> ensues, and the client mount point is deadlocked.
>>>>>>>>>
>>>>>>>>> Your analysis was that because the reproducer is run as
>>>>>>>>> root, both the reproducer's I/O operations, and lease
>>>>>>>>> management operations, attempt to use the same GSS context
>>>>>>>>> in UID 0's credential cache, but each uses different GSS
>>>>>>>>> services.
>>>>>>>>
>>>>>>>> As RFC2203 states, "In a creation request, the seq_num and service fields are undefined and both must be ignored by the server."
>>>>>>>> So a context creation request, while kicked off by an operation with a service attached (e.g. WRITE uses rpc_gss_svc_none and RENEW uses rpc_gss_svc_integrity), can be used at either service level.
>>>>>>>> AFAICS a single GSS context could in theory be used for all service levels, but in practice, GSS contexts are restricted to a service level (by the client? by the server?) once they are used.
>>>>>>>>
>>>>>>>>
>>>>>>>>> The key issue seems to be why, when the mount
>>>>>>>>> is first established, the client is correctly able to
>>>>>>>>> establish two separate GSS contexts for UID 0; but after
>>>>>>>>> a server reboot, the client attempts to use the same GSS
>>>>>>>>> context with two different GSS services.
>>>>>>>>
>>>>>>>> I speculate that it is a race between the WRITE and the RENEW to use the same newly created GSS context that has not been used yet, and so has no assigned service level, and the two requests race to set the service level.
>>>>>>>
>>>>>>> I agree with Andy. It must be a tight race.
>>>>>>
>>>>>> In one capture I see something like this after
>>>>>> the server restarts:
>>>>>>
>>>>>> SYN
>>>>>> SYN, ACK
>>>>>> ACK
>>>>>> C WRITE
>>>>>> C SEQUENCE
>>>>>> R WRITE -> CTX_PROBLEM
>>>>>> R SEQUENCE -> CTX_PROBLEM
>>>>>> C NULL (KRB5_AP_REQ)
>>>>>> R NULL (KRB5_AP_REP)
>>>>>> C WRITE
>>>>>> C SEQUENCE
>>>>>> R WRITE -> NFS4ERR_STALE_STATEID
>>>>>> R SEQUENCE -> AUTH_FAILED
>>>>>>
>>>>>> Andy's theory neatly explains this behavior.
>>>>>>
>>>>>>
>>>>>>> I have tried to reproduce
>>>>>>> your scenario and in my tests of rebooting the server all recover
>>>>>>> correctly. In my case, if RENEW was the one hitting the AUTH_ERR then
>>>>>>> the new context is established and then RENEW using integrity service
>>>>>>> is retried with the new context which gets ERR_STALE_CLIENTID which
>>>>>>> then client recovers from. If it's an operation (I have a GETATTR)
>>>>>>> that gets AUTH_ERR, then it gets new context and is retried using none
>>>>>>> service. Then RENEW gets its own AUTH_ERR as it uses a different
>>>>>>> context, a new context is gotten, RENEW is retried over integrity and
>>>>>>> gets ERR_STALE_CLIENTID which it recovers from.
>>>>>>
>>>>>> If one operation is allowed to complete, then
>>>>>> the other will always recognize that another
>>>>>> fresh GSS context is needed. If two are sent
>>>>>> at the same time, they race and one always
>>>>>> fails.
>>>>>>
>>>>>> Helen's test includes a second idle mount point
>>>>>> (sec=krb5i) and maybe that is needed to trigger
>>>>>> the race?
>>>>>
>>>>> Chuck, any chance to get "rpcdebug -m rpc auth" output during the
>>>>> failure (gssd optionally) (I realize that it might alter the timings
>>>>> and not hit the issue but worth a shot)?
>>>>
>>>> I'm sure that's fine. An internal tester hit this,
>>>> not a customer, so I will ask.
>>>>
>>>> I agree, though, that timing might be a problem:
>>>> these systems all have real serial consoles via
>>>> iLOM, so /v/l/m traffic does bring everything to
>>>> a standstill.
>>>>
>>>> Meanwhile, what's your opinion about AUTH_FAILED?
>>>> Should the server return RPCSEC_GSS_CTXPROBLEM
>>>> in this case instead? If it did, do you think
>>>> the Linux client would recover by creating a
>>>> replacement GSS context?
>>>
>>> Ah, yes, I equated AUTH_FAILED and AUTH_ERROR in my mind. If the client
>>> receives the reason as AUTH_FAILED as opposed to CTXPROBLEM, it will
>>> fail with an EIO error and will not try to create a new GSS context. So
>>> yes, I believe it would help if the server returns any of the
>>> following errors:
>>>                case RPC_AUTH_REJECTEDCRED:
>>>                case RPC_AUTH_REJECTEDVERF:
>>>                case RPCSEC_GSS_CREDPROBLEM:
>>>                case RPCSEC_GSS_CTXPROBLEM:
>>>
>>> then the client will recreate the context.
>>
>> Also in my testing, I can see that the credential cache is per GSS flavor.
>> Just to check, what kernel version is this problem encountered on? (I
>> know you said upstream, but I just want to double-check so that I can
>> look at the correct source code.)
>
> v4.1.12 (stable) I think.

Also, can you share the network trace?

>
>
>>
>> Thanks.
>>
>>>
>>>>
>>>>
>>>>
>>>>>>>> —>Andy
>>>>>>>>>
>>>>>>>>> One solution is to introduce a quick check before a
>>>>>>>>> context is used to see if the GSS service bound to it
>>>>>>>>> matches the GSS service that the caller intends to use.
>>>>>>>>> I'm not sure how that can be done without exposing a window
>>>>>>>>> where another caller requests the use of a GSS context and
>>>>>>>>> grabs the fresh one, before it can be used by our first
>>>>>>>>> caller and bound to its desired GSS service.
>>>>>>>>>
>>>>>>>>> Other solutions might be to somehow isolate the credential
>>>>>>>>> cache used for lease management operations, or to split
>>>>>>>>> credential caches by GSS service.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Chuck Lever
>>>>>>
>>>>>> --
>>>>>> Chuck Lever
>>>>
>>>> --
>>>> Chuck Lever
>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Problem re-establishing GSS contexts after a server reboot
  2016-07-25 18:18                 ` Olga Kornievskaia
@ 2016-07-29 16:27                   ` Olga Kornievskaia
  2016-07-29 16:38                     ` Chuck Lever
  0 siblings, 1 reply; 25+ messages in thread
From: Olga Kornievskaia @ 2016-07-29 16:27 UTC (permalink / raw)
  To: Chuck Lever; +Cc: Adamson, Andy, Linux NFS Mailing List

On Mon, Jul 25, 2016 at 2:18 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> On Thu, Jul 21, 2016 at 5:32 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>
>>
>>> On Jul 21, 2016, at 10:46 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>
>>>> On Thu, Jul 21, 2016 at 3:54 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>> On Thu, Jul 21, 2016 at 1:56 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>>
>>>>>> On Jul 21, 2016, at 6:04 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>>>
>>>>>> On Thu, Jul 21, 2016 at 2:55 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>>>>
>>>>>>>> On Jul 20, 2016, at 6:56 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>>>>>
>>>>>>>> On Wed, Jul 20, 2016 at 5:14 AM, Adamson, Andy
>>>>>>>> <William.Adamson@netapp.com> wrote:
>>>>>>>>>
>>>>>>>>>> On Jul 19, 2016, at 10:51 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>>>>>>>
>>>>>>>>>> Hi Andy-
>>>>>>>>>>
>>>>>>>>>> Thanks for taking the time to discuss this with me. I've
>>>>>>>>>> copied linux-nfs to make this e-mail also an upstream bug
>>>>>>>>>> report.
>>>>>>>>>>
>>>>>>>>>> As we saw in the network capture, recovery of GSS contexts
>>>>>>>>>> after a server reboot fails in certain cases with NFSv4.0
>>>>>>>>>> and NFSv4.1 mount points.
>>>>>>>>>>
>>>>>>>>>> The reproducer is a simple program that generates one NFS
>>>>>>>>>> WRITE periodically, run while the server repeatedly reboots
>>>>>>>>>> (or one cluster head fails over to the other and back). The
>>>>>>>>>> goal of the reproducer is to identify problems with state
>>>>>>>>>> recovery without a lot of other I/O going on to clutter up
>>>>>>>>>> the network capture.
>>>>>>>>>>
>>>>>>>>>> In the failing case, sec=krb5 is specified on the mount
>>>>>>>>>> point, and the reproducer is run as root. We've found this
>>>>>>>>>> combination fails with both NFSv4.0 and NFSv4.1.
>>>>>>>>>>
>>>>>>>>>> At mount time, the client establishes a GSS context for
>>>>>>>>>> lease management operations, which is bound to the client's
>>>>>>>>>> NFS service principal and uses GSS service "integrity."
>>>>>>>>>> Call this GSS context 1.
>>>>>>>>>>
>>>>>>>>>> When the reproducer starts, a second GSS context is
>>>>>>>>>> established for NFS operations associated with that user.
>>>>>>>>>> Since the reproducer is running as root, this context is
>>>>>>>>>> also bound to the client's NFS service principal, but it
>>>>>>>>>> uses the GSS service "none" (reflecting the explicit
>>>>>>>>>> request for "sec=krb5"). Call this GSS context 2.
>>>>>>>>>>
>>>>>>>>>> After the server reboots, the client re-establishes a TCP
>>>>>>>>>> connection with the server, and performs a RENEW
>>>>>>>>>> operation using context 1. Thanks to the server reboot,
>>>>>>>>>> contexts 1 and 2 are now stale. The server thus rejects
>>>>>>>>>> the RPC with RPCSEC_GSS_CTXPROBLEM.
>>>>>>>>>>
>>>>>>>>>> The client performs a GSS_INIT_SEC_CONTEXT via an NFSv4
>>>>>>>>>> NULL operation. Call this GSS context 3.
>>>>>>>>>>
>>>>>>>>>> Interestingly, the client does not resend the RENEW
>>>>>>>>>> operation at this point (if it did, we wouldn't see this
>>>>>>>>>> problem at all).
>>>>>>>>>>
>>>>>>>>>> The client then attempts to resume the reproducer workload.
>>>>>>>>>> It sends an NFSv4 WRITE operation, using the first available
>>>>>>>>>> GSS context in UID 0's credential cache, which is context 3,
>>>>>>>>>> already bound to the client's NFS service principal. But GSS
>>>>>>>>>> service "none" is used for this operation, since it is on
>>>>>>>>>> behalf of the mount where sec=krb5 was specified.
>>>>>>>>>>
>>>>>>>>>> The RPC is accepted, but the server reports
>>>>>>>>>> NFS4ERR_STALE_STATEID, since it has recently rebooted.
>>>>>>>>>>
>>>>>>>>>> The client responds by attempting state recovery. The
>>>>>>>>>> first operation it tries is another RENEW. Since this is
>>>>>>>>>> a lease management operation, the client looks in UID 0's
>>>>>>>>>> credential cache again and finds the recently established
>>>>>>>>>> context 3. It tries the RENEW operation using GSS context
>>>>>>>>>> 3 with GSS service "integrity."
>>>>>>>>>>
>>>>>>>>>> The server rejects the RENEW RPC with AUTH_FAILED, and
>>>>>>>>>> the client reports that "check lease failed" and
>>>>>>>>>> terminates state recovery.
>>>>>>>>>>
>>>>>>>>>> The client re-drives the WRITE operation with the stale
>>>>>>>>>> stateid with predictable results. The client again tries
>>>>>>>>>> to recover state by sending a RENEW, and still uses the
>>>>>>>>>> same GSS context 3 with service "integrity" and gets the
>>>>>>>>>> same result. A (perhaps slow-motion) STALE_STATEID loop
>>>>>>>>>> ensues, and the client mount point is deadlocked.
>>>>>>>>>>
>>>>>>>>>> Your analysis was that because the reproducer is run as
>>>>>>>>>> root, both the reproducer's I/O operations, and lease
>>>>>>>>>> management operations, attempt to use the same GSS context
>>>>>>>>>> in UID 0's credential cache, but each uses different GSS
>>>>>>>>>> services.
>>>>>>>>>
>>>>>>>>> As RFC2203 states, "In a creation request, the seq_num and service fields are undefined and both must be ignored by the server."
>>>>>>>>> So a context creation request, while kicked off by an operation with a service attached (e.g. WRITE uses rpc_gss_svc_none and RENEW uses rpc_gss_svc_integrity), can be used at either service level.
>>>>>>>>> AFAICS a single GSS context could in theory be used for all service levels, but in practice, GSS contexts are restricted to a service level (by the client? by the server?) once they are used.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> The key issue seems to be why, when the mount
>>>>>>>>>> is first established, the client is correctly able to
>>>>>>>>>> establish two separate GSS contexts for UID 0; but after
>>>>>>>>>> a server reboot, the client attempts to use the same GSS
>>>>>>>>>> context with two different GSS services.
>>>>>>>>>
>>>>>>>>> I speculate that it is a race between the WRITE and the RENEW to use the same newly created GSS context that has not been used yet, and so has no assigned service level, and the two requests race to set the service level.
>>>>>>>>
>>>>>>>> I agree with Andy. It must be a tight race.
>>>>>>>
>>>>>>> In one capture I see something like this after
>>>>>>> the server restarts:
>>>>>>>
>>>>>>> SYN
>>>>>>> SYN, ACK
>>>>>>> ACK
>>>>>>> C WRITE
>>>>>>> C SEQUENCE
>>>>>>> R WRITE -> CTX_PROBLEM
>>>>>>> R SEQUENCE -> CTX_PROBLEM
>>>>>>> C NULL (KRB5_AP_REQ)
>>>>>>> R NULL (KRB5_AP_REP)
>>>>>>> C WRITE
>>>>>>> C SEQUENCE
>>>>>>> R WRITE -> NFS4ERR_STALE_STATEID
>>>>>>> R SEQUENCE -> AUTH_FAILED
>>>>>>>
>>>>>>> Andy's theory neatly explains this behavior.
>>>>>>>
>>>>>>>
>>>>>>>> I have tried to reproduce
>>>>>>>> your scenario and in my tests of rebooting the server all recover
>>>>>>>> correctly. In my case, if RENEW was the one hitting the AUTH_ERR then
>>>>>>>> the new context is established and then RENEW using integrity service
>>>>>>>> is retried with the new context which gets ERR_STALE_CLIENTID which
>>>>>>>> then client recovers from. If it's an operation (I have a GETATTR)
>>>>>>>> that gets AUTH_ERR, then it gets new context and is retried using none
>>>>>>>> service. Then RENEW gets its own AUTH_ERR as it uses a different
>>>>>>>> context, a new context is gotten, RENEW is retried over integrity and
>>>>>>>> gets ERR_STALE_CLIENTID which it recovers from.
>>>>>>>
>>>>>>> If one operation is allowed to complete, then
>>>>>>> the other will always recognize that another
>>>>>>> fresh GSS context is needed. If two are sent
>>>>>>> at the same time, they race and one always
>>>>>>> fails.
>>>>>>>
>>>>>>> Helen's test includes a second idle mount point
>>>>>>> (sec=krb5i) and maybe that is needed to trigger
>>>>>>> the race?
>>>>>>
>>>>>> Chuck, any chance to get "rpcdebug -m rpc auth" output during the
>>>>>> failure (gssd optionally) (I realize that it might alter the timings
>>>>>> and not hit the issue but worth a shot)?
>>>>>
>>>>> I'm sure that's fine. An internal tester hit this,
>>>>> not a customer, so I will ask.
>>>>>
>>>>> I agree, though, that timing might be a problem:
>>>>> these systems all have real serial consoles via
>>>>> iLOM, so /v/l/m traffic does bring everything to
>>>>> a standstill.
>>>>>
>>>>> Meanwhile, what's your opinion about AUTH_FAILED?
>>>>> Should the server return RPCSEC_GSS_CTXPROBLEM
>>>>> in this case instead? If it did, do you think
>>>>> the Linux client would recover by creating a
>>>>> replacement GSS context?
>>>>
>>>> Ah, yes, I equated AUTH_FAILED and AUTH_ERROR in my mind. If the client
>>>> receives the reason as AUTH_FAILED as opposed to CTXPROBLEM, it will
>>>> fail with an EIO error and will not try to create a new GSS context. So
>>>> yes, I believe it would help if the server returns any of the
>>>> following errors:
>>>>                case RPC_AUTH_REJECTEDCRED:
>>>>                case RPC_AUTH_REJECTEDVERF:
>>>>                case RPCSEC_GSS_CREDPROBLEM:
>>>>                case RPCSEC_GSS_CTXPROBLEM:
>>>>
>>>> then the client will recreate the context.
>>>
>>> Also in my testing, I can see that the credential cache is per GSS flavor.
>>> Just to check, what kernel version is this problem encountered on? (I
>>> know you said upstream, but I just want to double-check so that I can
>>> look at the correct source code.)
>>
>> v4.1.12 (stable) I think.
>
> Also, can you share the network trace?

Hi Chuck,

I was finally able to reproduce the condition you were seeing (i.e.,
the use of the same context for different gss services).

I enabled rpcdebug rpc auth and I can see that the 2nd request ends up
finding the existing gss_upcall message because it is matched by the UID alone.
There is even a comment in auth_gss/auth_gss.c in gss_add_msg() saying
that if there is already an upcall for a UID, it won't add another upcall.
So I think the decision is made right there to share the same context
no matter the gss service.
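
To make that concrete, here is a tiny stand-alone model (not the
kernel's auth_gss code; the names and structures below are invented
purely for illustration) of an upcall list that is matched on UID
alone. When a WRITE wanting service "none" and a RENEW wanting
"integrity" both arrive for UID 0, the second caller finds the first
caller's pending upcall and ends up sharing whatever context it
produces:

#include <stdio.h>

enum gss_svc { GSS_SVC_NONE, GSS_SVC_INTEGRITY, GSS_SVC_PRIVACY };

struct upcall {
        unsigned int uid;
        enum gss_svc svc;       /* service wanted by the RPC that queued it */
        int in_use;
};

static struct upcall pending[8];

/* Match on UID only -- the behavior described above. */
static struct upcall *find_upcall_by_uid(unsigned int uid)
{
        for (int i = 0; i < 8; i++)
                if (pending[i].in_use && pending[i].uid == uid)
                        return &pending[i];
        return NULL;
}

static struct upcall *queue_upcall(unsigned int uid, enum gss_svc svc)
{
        struct upcall *up = find_upcall_by_uid(uid);

        if (up)         /* second caller piggybacks on the pending upcall */
                return up;
        for (int i = 0; i < 8; i++) {
                if (!pending[i].in_use) {
                        pending[i].uid = uid;
                        pending[i].svc = svc;
                        pending[i].in_use = 1;
                        return &pending[i];
                }
        }
        return NULL;
}

int main(void)
{
        /* WRITE on the sec=krb5 mount (svc none) races with RENEW (integrity) */
        struct upcall *w = queue_upcall(0, GSS_SVC_NONE);
        struct upcall *r = queue_upcall(0, GSS_SVC_INTEGRITY);

        printf("WRITE and RENEW share one upcall: %s (svc=%d)\n",
               w == r ? "yes" : "no", (int)w->svc);
        return 0;
}

Compiled and run, it reports that both callers share the single
upcall that was queued for the WRITE.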

>
>>
>>
>>>
>>> Thanks.
>>>
>>>>
>>>>>
>>>>>
>>>>>
>>>>>>>>> —>Andy
>>>>>>>>>>
>>>>>>>>>> One solution is to introduce a quick check before a
>>>>>>>>>> context is used to see if the GSS service bound to it
>>>>>>>>>> matches the GSS service that the caller intends to use.
>>>>>>>>>> I'm not sure how that can be done without exposing a window
>>>>>>>>>> where another caller requests the use of a GSS context and
>>>>>>>>>> grabs the fresh one, before it can be used by our first
>>>>>>>>>> caller and bound to its desired GSS service.
>>>>>>>>>>
>>>>>>>>>> Other solutions might be to somehow isolate the credential
>>>>>>>>>> cache used for lease management operations, or to split
>>>>>>>>>> credential caches by GSS service.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Chuck Lever
>>>>>>>
>>>>>>> --
>>>>>>> Chuck Lever
>>>>>
>>>>> --
>>>>> Chuck Lever
>>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Problem re-establishing GSS contexts after a server reboot
  2016-07-29 16:27                   ` Olga Kornievskaia
@ 2016-07-29 16:38                     ` Chuck Lever
  2016-07-29 17:07                       ` Adamson, Andy
  2016-08-02 18:06                       ` J. Bruce Fields
  0 siblings, 2 replies; 25+ messages in thread
From: Chuck Lever @ 2016-07-29 16:38 UTC (permalink / raw)
  To: Olga Kornievskaia; +Cc: Adamson, Andy, Linux NFS Mailing List


> On Jul 29, 2016, at 12:27 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> 
> On Mon, Jul 25, 2016 at 2:18 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>> On Thu, Jul 21, 2016 at 5:32 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>> 
>>> 
>>>> On Jul 21, 2016, at 10:46 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>> 
>>>>> On Thu, Jul 21, 2016 at 3:54 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>>> On Thu, Jul 21, 2016 at 1:56 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>>> 
>>>>>>> On Jul 21, 2016, at 6:04 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>>>> 
>>>>>>> On Thu, Jul 21, 2016 at 2:55 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>>>>> 
>>>>>>>>> On Jul 20, 2016, at 6:56 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>>>>>> 
>>>>>>>>> On Wed, Jul 20, 2016 at 5:14 AM, Adamson, Andy
>>>>>>>>> <William.Adamson@netapp.com> wrote:
>>>>>>>>>> 
>>>>>>>>>>> On Jul 19, 2016, at 10:51 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>>>>>>>> 
>>>>>>>>>>> Hi Andy-
>>>>>>>>>>> 
>>>>>>>>>>> Thanks for taking the time to discuss this with me. I've
>>>>>>>>>>> copied linux-nfs to make this e-mail also an upstream bug
>>>>>>>>>>> report.
>>>>>>>>>>> 
>>>>>>>>>>> As we saw in the network capture, recovery of GSS contexts
>>>>>>>>>>> after a server reboot fails in certain cases with NFSv4.0
>>>>>>>>>>> and NFSv4.1 mount points.
>>>>>>>>>>> 
>>>>>>>>>>> The reproducer is a simple program that generates one NFS
>>>>>>>>>>> WRITE periodically, run while the server repeatedly reboots
>>>>>>>>>>> (or one cluster head fails over to the other and back). The
>>>>>>>>>>> goal of the reproducer is to identify problems with state
>>>>>>>>>>> recovery without a lot of other I/O going on to clutter up
>>>>>>>>>>> the network capture.
>>>>>>>>>>> 
>>>>>>>>>>> In the failing case, sec=krb5 is specified on the mount
>>>>>>>>>>> point, and the reproducer is run as root. We've found this
>>>>>>>>>>> combination fails with both NFSv4.0 and NFSv4.1.
>>>>>>>>>>> 
>>>>>>>>>>> At mount time, the client establishes a GSS context for
>>>>>>>>>>> lease management operations, which is bound to the client's
>>>>>>>>>>> NFS service principal and uses GSS service "integrity."
>>>>>>>>>>> Call this GSS context 1.
>>>>>>>>>>> 
>>>>>>>>>>> When the reproducer starts, a second GSS context is
>>>>>>>>>>> established for NFS operations associated with that user.
>>>>>>>>>>> Since the reproducer is running as root, this context is
>>>>>>>>>>> also bound to the client's NFS service principal, but it
>>>>>>>>>>> uses the GSS service "none" (reflecting the explicit
>>>>>>>>>>> request for "sec=krb5"). Call this GSS context 2.
>>>>>>>>>>> 
>>>>>>>>>>> After the server reboots, the client re-establishes a TCP
>>>>>>>>>>> connection with the server, and performs a RENEW
>>>>>>>>>>> operation using context 1. Thanks to the server reboot,
>>>>>>>>>>> contexts 1 and 2 are now stale. The server thus rejects
>>>>>>>>>>> the RPC with RPCSEC_GSS_CTXPROBLEM.
>>>>>>>>>>> 
>>>>>>>>>>> The client performs a GSS_INIT_SEC_CONTEXT via an NFSv4
>>>>>>>>>>> NULL operation. Call this GSS context 3.
>>>>>>>>>>> 
>>>>>>>>>>> Interestingly, the client does not resend the RENEW
>>>>>>>>>>> operation at this point (if it did, we wouldn't see this
>>>>>>>>>>> problem at all).
>>>>>>>>>>> 
>>>>>>>>>>> The client then attempts to resume the reproducer workload.
>>>>>>>>>>> It sends an NFSv4 WRITE operation, using the first available
>>>>>>>>>>> GSS context in UID 0's credential cache, which is context 3,
>>>>>>>>>>> already bound to the client's NFS service principal. But GSS
>>>>>>>>>>> service "none" is used for this operation, since it is on
>>>>>>>>>>> behalf of the mount where sec=krb5 was specified.
>>>>>>>>>>> 
>>>>>>>>>>> The RPC is accepted, but the server reports
>>>>>>>>>>> NFS4ERR_STALE_STATEID, since it has recently rebooted.
>>>>>>>>>>> 
>>>>>>>>>>> The client responds by attempting state recovery. The
>>>>>>>>>>> first operation it tries is another RENEW. Since this is
>>>>>>>>>>> a lease management operation, the client looks in UID 0's
>>>>>>>>>>> credential cache again and finds the recently established
>>>>>>>>>>> context 3. It tries the RENEW operation using GSS context
>>>>>>>>>>> 3 with GSS service "integrity."
>>>>>>>>>>> 
>>>>>>>>>>> The server rejects the RENEW RPC with AUTH_FAILED, and
>>>>>>>>>>> the client reports that "check lease failed" and
>>>>>>>>>>> terminates state recovery.
>>>>>>>>>>> 
>>>>>>>>>>> The client re-drives the WRITE operation with the stale
>>>>>>>>>>> stateid with predictable results. The client again tries
>>>>>>>>>>> to recover state by sending a RENEW, and still uses the
>>>>>>>>>>> same GSS context 3 with service "integrity" and gets the
>>>>>>>>>>> same result. A (perhaps slow-motion) STALE_STATEID loop
>>>>>>>>>>> ensues, and the client mount point is deadlocked.
>>>>>>>>>>> 
>>>>>>>>>>> Your analysis was that because the reproducer is run as
>>>>>>>>>>> root, both the reproducer's I/O operations, and lease
>>>>>>>>>>> management operations, attempt to use the same GSS context
>>>>>>>>>>> in UID 0's credential cache, but each uses different GSS
>>>>>>>>>>> services.
>>>>>>>>>> 
>>>>>>>>>> As RFC2203 states, "In a creation request, the seq_num and service fields are undefined and both must be ignored by the server."
>>>>>>>>>> So a context creation request, while kicked off by an operation with a service attached (e.g. WRITE uses rpc_gss_svc_none and RENEW uses rpc_gss_svc_integrity), can be used at either service level.
>>>>>>>>>> AFAICS a single GSS context could in theory be used for all service levels, but in practice, GSS contexts are restricted to a service level (by the client? by the server?) once they are used.
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>>> The key issue seems to be why, when the mount
>>>>>>>>>>> is first established, the client is correctly able to
>>>>>>>>>>> establish two separate GSS contexts for UID 0; but after
>>>>>>>>>>> a server reboot, the client attempts to use the same GSS
>>>>>>>>>>> context with two different GSS services.
>>>>>>>>>> 
>>>>>>>>>> I speculate that it is a race between the WRITE and the RENEW to use the same newly created GSS context that has not been used yet, and so has no assigned service level, and the two requests race to set the service level.
>>>>>>>>> 
>>>>>>>>> I agree with Andy. It must be a tight race.
>>>>>>>> 
>>>>>>>> In one capture I see something like this after
>>>>>>>> the server restarts:
>>>>>>>> 
>>>>>>>> SYN
>>>>>>>> SYN, ACK
>>>>>>>> ACK
>>>>>>>> C WRITE
>>>>>>>> C SEQUENCE
>>>>>>>> R WRITE -> CTX_PROBLEM
>>>>>>>> R SEQUENCE -> CTX_PROBLEM
>>>>>>>> C NULL (KRB5_AP_REQ)
>>>>>>>> R NULL (KRB5_AP_REP)
>>>>>>>> C WRITE
>>>>>>>> C SEQUENCE
>>>>>>>> R WRITE -> NFS4ERR_STALE_STATEID
>>>>>>>> R SEQUENCE -> AUTH_FAILED
>>>>>>>> 
>>>>>>>> Andy's theory neatly explains this behavior.
>>>>>>>> 
>>>>>>>> 
>>>>>>>>> I have tried to reproduce
>>>>>>>>> your scenario and in my tests of rebooting the server all recover
>>>>>>>>> correctly. In my case, if RENEW was the one hitting the AUTH_ERR then
>>>>>>>>> the new context is established and then RENEW using integrity service
>>>>>>>>> is retried with the new context which gets ERR_STALE_CLIENTID which
>>>>>>>>> then client recovers from. If it's an operation (I have a GETATTR)
>>>>>>>>> that gets AUTH_ERR, then it gets new context and is retried using none
>>>>>>>>> service. Then RENEW gets its own AUTH_ERR as it uses a different
>>>>>>>>> context, a new context is gotten, RENEW is retried over integrity and
>>>>>>>>> gets ERR_STALE_CLIENTID which it recovers from.
>>>>>>>> 
>>>>>>>> If one operation is allowed to complete, then
>>>>>>>> the other will always recognize that another
>>>>>>>> fresh GSS context is needed. If two are sent
>>>>>>>> at the same time, they race and one always
>>>>>>>> fails.
>>>>>>>> 
>>>>>>>> Helen's test includes a second idle mount point
>>>>>>>> (sec=krb5i) and maybe that is needed to trigger
>>>>>>>> the race?
>>>>>>> 
>>>>>>> Chuck, any chance to get "rpcdebug -m rpc auth" output during the
>>>>>>> failure (gssd optionally) (I realize that it might alter the timings
>>>>>>> and not hit the issue but worth a shot)?
>>>>>> 
>>>>>> I'm sure that's fine. An internal tester hit this,
>>>>>> not a customer, so I will ask.
>>>>>> 
>>>>>> I agree, though, that timing might be a problem:
>>>>>> these systems all have real serial consoles via
>>>>>> iLOM, so /v/l/m traffic does bring everything to
>>>>>> a standstill.
>>>>>> 
>>>>>> Meanwhile, what's your opinion about AUTH_FAILED?
>>>>>> Should the server return RPCSEC_GSS_CTXPROBLEM
>>>>>> in this case instead? If it did, do you think
>>>>>> the Linux client would recover by creating a
>>>>>> replacement GSS context?
>>>>> 
>>>>> Ah, yes, I equated AUTH_FAILED and AUTH_ERROR in my mind. If the client
>>>>> receives the reason as AUTH_FAILED as opposed to CTXPROBLEM, it will
>>>>> fail with an EIO error and will not try to create a new GSS context. So
>>>>> yes, I believe it would help if the server returns any of the
>>>>> following errors:
>>>>>               case RPC_AUTH_REJECTEDCRED:
>>>>>               case RPC_AUTH_REJECTEDVERF:
>>>>>               case RPCSEC_GSS_CREDPROBLEM:
>>>>>               case RPCSEC_GSS_CTXPROBLEM:
>>>>> 
>>>>> then the client will recreate the context.
>>>> 
>>>> Also in my testing, I can see that the credential cache is per GSS flavor.
>>>> Just to check, what kernel version is this problem encountered on? (I
>>>> know you said upstream, but I just want to double-check so that I can
>>>> look at the correct source code.)
>>> 
>>> v4.1.12 (stable) I think.
>> 
>> Also, can you share the network trace?
> 
> Hi Chuck,
> 
> I was finally able to reproduce the condition you were seeing (i.e.,
> the use of the same context for different gss services).
> 
> I enabled rpcdebug rpc auth and I can see that the 2nd request ends up
> finding the existing gss_upcall message because it is matched by the UID alone.
> There is even a comment in auth_gss/auth_gss.c in gss_add_msg() saying
> that if there is already an upcall for a UID, it won't add another upcall.
> So I think the decision is made right there to share the same context
> no matter the gss service.

If I understand correctly, that's just what Andy predicted.

That check needs to be changed to allow another upcall to be
queued if the UID matches but the GSS service does not.
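
A minimal sketch of that check, using invented names rather than the
actual gss_add_msg()/upcall structures: reuse a pending upcall only
when both the UID and the wanted GSS service match, so a RENEW asking
for "integrity" no longer piggybacks on a WRITE's "none" upcall for
the same UID:

#include <assert.h>
#include <stdbool.h>

enum gss_svc { GSS_SVC_NONE, GSS_SVC_INTEGRITY, GSS_SVC_PRIVACY };

struct upcall_key {
        unsigned int uid;
        enum gss_svc svc;
};

/* Behavior seen in the capture: UID alone decides whether a pending
 * upcall is shared. */
static bool matches_uid_only(struct upcall_key a, struct upcall_key b)
{
        return a.uid == b.uid;
}

/* Suggested change: both the UID and the wanted GSS service must match. */
static bool matches_uid_and_svc(struct upcall_key a, struct upcall_key b)
{
        return a.uid == b.uid && a.svc == b.svc;
}

int main(void)
{
        struct upcall_key write_req = { 0, GSS_SVC_NONE };      /* sec=krb5 I/O   */
        struct upcall_key renew_req = { 0, GSS_SVC_INTEGRITY }; /* lease recovery */

        assert(matches_uid_only(write_req, renew_req));     /* shared today     */
        assert(!matches_uid_and_svc(write_req, renew_req)); /* separate upcalls */
        return 0;
}

The real change would presumably go in the upcall lookup path in
auth_gss.c that Olga pointed at.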


--
Chuck Lever




^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Problem re-establishing GSS contexts after a server reboot
  2016-07-29 16:38                     ` Chuck Lever
@ 2016-07-29 17:07                       ` Adamson, Andy
  2016-07-29 17:32                         ` Adamson, Andy
  2016-08-02 18:06                       ` J. Bruce Fields
  1 sibling, 1 reply; 25+ messages in thread
From: Adamson, Andy @ 2016-07-29 17:07 UTC (permalink / raw)
  To: Chuck Lever; +Cc: Olga Kornievskaia, Adamson, Andy, Linux NFS Mailing List

[Base64-encoded message body: it decodes to a quotation of the thread above and is cut off here.]
IG5ldyBjb250ZXh0IGlzIGVzdGFibGlzaGVkIGFuZCB0aGVuIFJFTkVXIHVzaW5nIGludGVncml0
eSBzZXJ2aWNlDQo+Pj4+Pj4+Pj4+IGlzIHJldHJpZWQgd2l0aCB0aGUgbmV3IGNvbnRleHQgd2hp
Y2ggZ2V0cyBFUlJfU1RBTEVfQ0xJRU5USUQgd2hpY2gNCj4+Pj4+Pj4+Pj4gdGhlbiBjbGllbnQg
cmVjb3ZlcnMgZnJvbS4gSWYgaXQncyBhbiBvcGVyYXRpb24gKEkgaGF2ZSBhIEdFVEFUVFIpDQo+
Pj4+Pj4+Pj4+IHRoYXQgZ2V0cyBBVVRIX0VSUiwgdGhlbiBpdCBnZXRzIG5ldyBjb250ZXh0IGFu
ZCBpcyByZXRyaWVkIHVzaW5nIG5vbmUNCj4+Pj4+Pj4+Pj4gc2VydmljZS4gVGhlbiBSRU5FVyBn
ZXRzIGl0cyBvd24gQVVUSF9FUlIgYXMgaXQgdXNlcyBhIGRpZmZlcmVudA0KPj4+Pj4+Pj4+PiBj
b250ZXh0LCBhIG5ldyBjb250ZXh0IGlzIGdvdHRlbiwgUkVORVcgaXMgcmV0cmllZCBvdmVyIGlu
dGVncml0eSBhbmQNCj4+Pj4+Pj4+Pj4gZ2V0cyBFUlJfU1RBTEVfQ0xJRU5USUQgd2hpY2ggaXQg
cmVjb3ZlcnMgZnJvbS4NCj4+Pj4+Pj4+PiANCj4+Pj4+Pj4+PiBJZiBvbmUgb3BlcmF0aW9uIGlz
IGFsbG93ZWQgdG8gY29tcGxldGUsIHRoZW4NCj4+Pj4+Pj4+PiB0aGUgb3RoZXIgd2lsbCBhbHdh
eXMgcmVjb2duaXplIHRoYXQgYW5vdGhlcg0KPj4+Pj4+Pj4+IGZyZXNoIEdTUyBjb250ZXh0IGlz
IG5lZWRlZC4gSWYgdHdvIGFyZSBzZW50DQo+Pj4+Pj4+Pj4gYXQgdGhlIHNhbWUgdGltZSwgdGhl
eSByYWNlIGFuZCBvbmUgYWx3YXlzDQo+Pj4+Pj4+Pj4gZmFpbHMuDQo+Pj4+Pj4+Pj4gDQo+Pj4+
Pj4+Pj4gSGVsZW4ncyB0ZXN0IGluY2x1ZGVzIGEgc2Vjb25kIGlkbGUgbW91bnQgcG9pbnQNCj4+
Pj4+Pj4+PiAoc2VjPWtyYjVpKSBhbmQgbWF5YmUgdGhhdCBpcyBuZWVkZWQgdG8gdHJpZ2dlcg0K
Pj4+Pj4+Pj4+IHRoZSByYWNlPw0KPj4+Pj4+Pj4gDQo+Pj4+Pj4+PiBDaHVjaywgYW55IGNoYW5j
ZSB0byBnZXQgInJwY2RlYnVnIC1tIHJwYyBhdXRoIiBvdXRwdXQgZHVyaW5nIHRoZQ0KPj4+Pj4+
Pj4gZmFpbHVyZSAoZ3NzZCBvcHRpb25hbGx5KSAoaSByZWFsaXplIHRoYXQgaXQgbWlnaHQgYWx0
ZXIgdGhlIHRpbWluZ3MNCj4+Pj4+Pj4+IGFuZCBub3QgaGl0IHRoZSBpc3N1ZSBidXQgd29ydGgg
YSBzaG90KT8NCj4+Pj4+Pj4gDQo+Pj4+Pj4+IEknbSBzdXJlIHRoYXQncyBmaW5lLiBBbiBpbnRl
cm5hbCB0ZXN0ZXIgaGl0IHRoaXMsDQo+Pj4+Pj4+IG5vdCBhIGN1c3RvbWVyLCBzbyBJIHdpbGwg
YXNrLg0KPj4+Pj4+PiANCj4+Pj4+Pj4gSSBhZ3JlZSwgdGhvdWdoLCB0aGF0IHRpbWluZyBtaWdo
dCBiZSBhIHByb2JsZW06DQo+Pj4+Pj4+IHRoZXNlIHN5c3RlbXMgYWxsIGhhdmUgcmVhbCBzZXJp
YWwgY29uc29sZXMgdmlhDQo+Pj4+Pj4+IGlMT00sIHNvIC92L2wvbSB0cmFmZmljIGRvZXMgYnJp
bmcgZXZlcnl0aGluZyB0bw0KPj4+Pj4+PiBhIHN0YW5kc3RpbGwuDQo+Pj4+Pj4+IA0KPj4+Pj4+
PiBNZWFud2hpbGUsIHdoYXQncyB5b3UncmUgb3BpbmlvbiBhYm91dCBBVVRIX0ZBSUxFRD8NCj4+
Pj4+Pj4gU2hvdWxkIHRoZSBzZXJ2ZXIgcmV0dXJuIFJQQ1NFQ19HU1NfQ1RYUFJPQkxFTQ0KPj4+
Pj4+PiBpbiB0aGlzIGNhc2UgaW5zdGVhZD8gSWYgaXQgZGlkLCBkbyB5b3UgdGhpbmsNCj4+Pj4+
Pj4gdGhlIExpbnV4IGNsaWVudCB3b3VsZCByZWNvdmVyIGJ5IGNyZWF0aW5nIGENCj4+Pj4+Pj4g
cmVwbGFjZW1lbnQgR1NTIGNvbnRleHQ/DQo+Pj4+Pj4gDQo+Pj4+Pj4gQWgsIHllcywgSSBlcXVh
dGVkIEFVVEhfRkFJTEVEIEFuZCBBVVRIX0VSUk9SIGluIG15IG1pbmQuIElmIGNsaWVudA0KPj4+
Pj4+IHJlY2VpdmVzIHRoZSByZWFzb24gYXMgQVVUSF9GQUlMRUQgYXMgb3Bwb3NlIHRvIENUWFBS
T0JMRU0gaXQgd2lsbA0KPj4+Pj4+IGZhaWwgd2l0aCBFSU8gZXJyb3IgYW5kIHdpbGwgbm90IHRy
eSB0byBjcmVhdGUgYSBuZXcgR1NTIGNvbnRleHQuIFNvDQo+Pj4+Pj4geWVzLCBJIGJlbGlldmUg
aXQgd291bGQgaGVscCBpZiB0aGUgc2VydmVyIHJldHVybnMgYW55IG9mIHRoZQ0KPj4+Pj4+IGZv
bGxvd2luZyBlcnJvcnM6DQo+Pj4+Pj4gICAgICAgICAgICAgIGNhc2UgUlBDX0FVVEhfUkVKRUNU
RURDUkVEOg0KPj4+Pj4+ICAgICAgICAgICAgICBjYXNlIFJQQ19BVVRIX1JFSkVDVEVEVkVSRjoN
Cj4+Pj4+PiAgICAgICAgICAgICAgY2FzZSBSUENTRUNfR1NTX0NSRURQUk9CTEVNOg0KPj4+Pj4+
ICAgICAgICAgICAgICBjYXNlIFJQQ1NFQ19HU1NfQ1RYUFJPQkxFTToNCj4+Pj4+PiANCj4+Pj4+
PiB0aGVuIHRoZSBjbGllbnQgd2lsbCByZWNyZWF0ZSB0aGUgY29udGV4dC4NCj4+Pj4+IA0KPj4+
Pj4gQWxzbyBpbiBteSB0ZXN0aW5nLCBJIGNhbiBzZWUgdGhhdCBjcmVkZW50aWFsIGNhY2hlIGlz
IHBlciBnc3MgZmxhdm9yLg0KPj4+Pj4gSnVzdCB0byBjaGVjaywgd2hhdCBrZXJuZWwgdmVyc2lv
biBpcyB0aGlzIHByb2JsZW0gZW5jb3VudGVyZWQgb24gKEkNCj4+Pj4+IGtub3cgeW91IHNhaWQg
dXBzdHJlYW0pIGJ1dCBJIGp1c3Qgd2FudCB0byBkb3VibGUgY2hlY2sgc28gdGhhdCBJIGNhbg0K
Pj4+Pj4gbG9vayBhdCB0aGUgY29ycmVjdCBzb3VyY2UgY29kZS4NCj4+Pj4gDQo+Pj4+IHY0LjEu
MTIgKHN0YWJsZSkgSSB0aGluay4NCj4+PiANCj4+PiBBbHNvLCBjYW4geW91IHNoYXJlIHRoZSBu
ZXR3b3JrIHRyYWNlPw0KPj4gDQo+PiBIaSBDaHVjaywNCj4+IA0KPj4gSSB3YXMgZmluYWxseSBh
YmxlIHRvIHJlcHJvZHVjZSB0aGUgY29uZGl0aW9uIHlvdSB3ZXJlIHNlZWluZyAoaS5lLiwNCj4+
IHRoZSB1c2Ugb2YgdGhlIHNhbWUgY29udGV4dCBmb3IgZGlmZmVyZW50IGdzcyBzZXJ2aWNlcyku
DQo+PiANCj4+IEkgZW5hYmxlZCBycGNkZWJ1ZyBycGMgYXV0aCBhbmQgSSBjYW4gc2VlIHRoYXQg
dGhlIDJuZCByZXF1ZXN0IGVuZHMgdXANCj4+IGZpbmRpbmcgYSBnc3NfdXBjYWxsIG1lc3NhZ2Ug
YmVjYXVzZSBpdCdzIGp1c3QgbWF0Y2hlZCBieSB0aGUgdWlkLg0KPj4gVGhlcmUgaXMgZXZlbiBh
IGNvbW1lbnQgaW4gYXV0aF9nc3MvYXV0aF9nc3MuYyBpbiBnc3NfYWRkX21zZygpIHNheWluZw0K
Pj4gdGhhdCBpZiB0aGVyZSBpcyB1cGNhbGwgZm9yIGFuIHVpZCB0aGVuIGl0IHdvbid0IGFkZCBh
bm90aGVyIHVwY2FsbC4NCj4+IFNvIEkgdGhpbmsgdGhlIGRlY2lzaW9uIGlzIG1hZGUgcmlnaHQg
dGhlcmUgdG8gc2hhcmUgdGhlIHNhbWUgY29udGV4dA0KPj4gbm8gbWF0dGVyIHRoZSBnc3Mgc2Vy
dmljZS4NCj4gDQo+IElmIEkgdW5kZXJzdGFuZCBjb3JyZWN0bHksIHRoYXQncyBqdXN0IHdoYXQg
QW5keSBwcmVkaWN0ZWQuDQo+IA0KPiBUaGF0IGNoZWNrIG5lZWRzIHRvIGJlIGNoYW5nZWQgdG8g
YWxsb3cgYW5vdGhlciB1cGNhbGwgdG8gYmUNCj4gcXVldWVkIGlmIHRoZSBVSUQgbWF0Y2hlcyBi
dXQgdGhlIEdTUyBzZXJ2aWNlIGRvZXMgbm90Lg0KDQpZZXMsIHdlIG5lZWQgdG8gZW5zdXJlIHRo
YXQgdGhlIHNlcnZpY2UgaXMgc2V0IGV2ZW4gdGhvdWdoIGl0IGlzIGlnbm9yZWQgYnkgdGhlIHNl
cnZlci4NCg0K4oCUPkFuZHkNCg0KPiANCj4gDQo+IC0tDQo+IENodWNrIExldmVyDQoNCg==

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Problem re-establishing GSS contexts after a server reboot
  2016-07-29 17:07                       ` Adamson, Andy
@ 2016-07-29 17:32                         ` Adamson, Andy
  2016-07-29 22:24                           ` Olga Kornievskaia
  0 siblings, 1 reply; 25+ messages in thread
From: Adamson, Andy @ 2016-07-29 17:32 UTC (permalink / raw)
  To: Adamson, Andy; +Cc: Chuck Lever, Olga Kornievskaia, Linux NFS Mailing List

[base64-encoded message body; decoded here with the quoted thread omitted]

>> That check needs to be changed to allow another upcall to be
>> queued if the UID matches but the GSS service does not.
>
> Yes, we need to ensure that the service is set even though it is
> ignored by the server.

…service is set in the gss_upcall_msg…

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Problem re-establishing GSS contexts after a server reboot
  2016-07-29 17:32                         ` Adamson, Andy
@ 2016-07-29 22:24                           ` Olga Kornievskaia
  0 siblings, 0 replies; 25+ messages in thread
From: Olga Kornievskaia @ 2016-07-29 22:24 UTC (permalink / raw)
  To: Adamson, Andy; +Cc: Chuck Lever, Linux NFS Mailing List

On Fri, Jul 29, 2016 at 1:32 PM, Adamson, Andy
<William.Adamson@netapp.com> wrote:
>
>> On Jul 29, 2016, at 1:07 PM, Adamson, Andy <William.Adamson@netapp.com> wrote:
>>
>>>
>>> On Jul 29, 2016, at 12:38 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>
>>>>
>>>> On Jul 29, 2016, at 12:27 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>
>>>> On Mon, Jul 25, 2016 at 2:18 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>> On Thu, Jul 21, 2016 at 5:32 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>>>
>>>>>>
>>>>>>> On Jul 21, 2016, at 10:46 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>>>>
>>>>>>>> On Thu, Jul 21, 2016 at 3:54 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>>>>>> On Thu, Jul 21, 2016 at 1:56 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>>>>>>
>>>>>>>>>> On Jul 21, 2016, at 6:04 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>>>>>>>
>>>>>>>>>> On Thu, Jul 21, 2016 at 2:55 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> On Jul 20, 2016, at 6:56 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> On Wed, Jul 20, 2016 at 5:14 AM, Adamson, Andy
>>>>>>>>>>>> <William.Adamson@netapp.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Jul 19, 2016, at 10:51 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi Andy-
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks for taking the time to discuss this with me. I've
>>>>>>>>>>>>>> copied linux-nfs to make this e-mail also an upstream bug
>>>>>>>>>>>>>> report.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> As we saw in the network capture, recovery of GSS contexts
>>>>>>>>>>>>>> after a server reboot fails in certain cases with NFSv4.0
>>>>>>>>>>>>>> and NFSv4.1 mount points.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The reproducer is a simple program that generates one NFS
>>>>>>>>>>>>>> WRITE periodically, run while the server repeatedly reboots
>>>>>>>>>>>>>> (or one cluster head fails over to the other and back). The
>>>>>>>>>>>>>> goal of the reproducer is to identify problems with state
>>>>>>>>>>>>>> recovery without a lot of other I/O going on to clutter up
>>>>>>>>>>>>>> the network capture.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> In the failing case, sec=krb5 is specified on the mount
>>>>>>>>>>>>>> point, and the reproducer is run as root. We've found this
>>>>>>>>>>>>>> combination fails with both NFSv4.0 and NFSv4.1.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> At mount time, the client establishes a GSS context for
>>>>>>>>>>>>>> lease management operations, which is bound to the client's
>>>>>>>>>>>>>> NFS service principal and uses GSS service "integrity."
>>>>>>>>>>>>>> Call this GSS context 1.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> When the reproducer starts, a second GSS context is
>>>>>>>>>>>>>> established for NFS operations associated with that user.
>>>>>>>>>>>>>> Since the reproducer is running as root, this context is
>>>>>>>>>>>>>> also bound to the client's NFS service principal, but it
>>>>>>>>>>>>>> uses the GSS service "none" (reflecting the explicit
>>>>>>>>>>>>>> request for "sec=krb5"). Call this GSS context 2.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> After the server reboots, the client re-establishes a TCP
>>>>>>>>>>>>>> connection with the server, and performs a RENEW
>>>>>>>>>>>>>> operation using context 1. Thanks to the server reboot,
>>>>>>>>>>>>>> contexts 1 and 2 are now stale. The server thus rejects
>>>>>>>>>>>>>> the RPC with RPCSEC_GSS_CTXPROBLEM.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The client performs a GSS_INIT_SEC_CONTEXT via an NFSv4
>>>>>>>>>>>>>> NULL operation. Call this GSS context 3.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Interestingly, the client does not resend the RENEW
>>>>>>>>>>>>>> operation at this point (if it did, we wouldn't see this
>>>>>>>>>>>>>> problem at all).
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The client then attempts to resume the reproducer workload.
>>>>>>>>>>>>>> It sends an NFSv4 WRITE operation, using the first available
>>>>>>>>>>>>>> GSS context in UID 0's credential cache, which is context 3,
>>>>>>>>>>>>>> already bound to the client's NFS service principal. But GSS
>>>>>>>>>>>>>> service "none" is used for this operation, since it is on
>>>>>>>>>>>>>> behalf of the mount where sec=krb5 was specified.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The RPC is accepted, but the server reports
>>>>>>>>>>>>>> NFS4ERR_STALE_STATEID, since it has recently rebooted.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The client responds by attempting state recovery. The
>>>>>>>>>>>>>> first operation it tries is another RENEW. Since this is
>>>>>>>>>>>>>> a lease management operation, the client looks in UID 0's
>>>>>>>>>>>>>> credential cache again and finds the recently established
>>>>>>>>>>>>>> context 3. It tries the RENEW operation using GSS context
>>>>>>>>>>>>>> 3 with GSS service "integrity."
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The server rejects the RENEW RPC with AUTH_FAILED, and
>>>>>>>>>>>>>> the client reports that "check lease failed" and
>>>>>>>>>>>>>> terminates state recovery.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The client re-drives the WRITE operation with the stale
>>>>>>>>>>>>>> stateid with predictable results. The client again tries
>>>>>>>>>>>>>> to recover state by sending a RENEW, and still uses the
>>>>>>>>>>>>>> same GSS context 3 with service "integrity" and gets the
>>>>>>>>>>>>>> same result. A (perhaps slow-motion) STALE_STATEID loop
>>>>>>>>>>>>>> ensues, and the client mount point is deadlocked.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Your analysis was that because the reproducer is run as
>>>>>>>>>>>>>> root, both the reproducer's I/O operations, and lease
>>>>>>>>>>>>>> management operations, attempt to use the same GSS context
>>>>>>>>>>>>>> in UID 0's credential cache, but each uses different GSS
>>>>>>>>>>>>>> services.
>>>>>>>>>>>>>
>>>>>>>>>>>>> As RFC2203 states, "In a creation request, the seq_num and service fields are undefined and both must be ignored by the server”
>>>>>>>>>>>>> So a context creation request while kicked off by an operation with a service attached (e.g. WRITE uses rpc_gss_svc_none and RENEW uses rpc_gss_svc_integrity), can be used by either service level.
>>>>>>>>>>>>> AFAICS a single GSS context could in theory be used for all service levels, but in practice, GSS contexts are restricted to a service level (by client? by server? ) once they are used.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>> The key issue seems to be why, when the mount
>>>>>>>>>>>>>> is first established, the client is correctly able to
>>>>>>>>>>>>>> establish two separate GSS contexts for UID 0; but after
>>>>>>>>>>>>>> a server reboot, the client attempts to use the same GSS
>>>>>>>>>>>>>> context with two different GSS services.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I speculate that it is a race between the WRITE and the RENEW to use the same newly created GSS context that has not been used yet, and so has no assigned service level, and the two requests race to set the service level.
>>>>>>>>>>>>
>>>>>>>>>>>> I agree with Andy. It must be a tight race.
>>>>>>>>>>>
>>>>>>>>>>> In one capture I see something like this after
>>>>>>>>>>> the server restarts:
>>>>>>>>>>>
>>>>>>>>>>> SYN
>>>>>>>>>>> SYN, ACK
>>>>>>>>>>> ACK
>>>>>>>>>>> C WRITE
>>>>>>>>>>> C SEQUENCE
>>>>>>>>>>> R WRITE -> CTX_PROBLEM
>>>>>>>>>>> R SEQUENCE -> CTX_PROBLEM
>>>>>>>>>>> C NULL (KRB5_AP_REQ)
>>>>>>>>>>> R NULL (KRB5_AP_REP)
>>>>>>>>>>> C WRITE
>>>>>>>>>>> C SEQUENCE
>>>>>>>>>>> R WRITE -> NFS4ERR_STALE_STATEID
>>>>>>>>>>> R SEQUENCE -> AUTH_FAILED
>>>>>>>>>>>
>>>>>>>>>>> Andy's theory neatly explains this behavior.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> I have tried to reproduce
>>>>>>>>>>>> your scenario and in my tests of rebooting the server all recover
>>>>>>>>>>>> correctly. In my case, if RENEW was the one hitting the AUTH_ERR then
>>>>>>>>>>>> the new context is established and then RENEW using integrity service
>>>>>>>>>>>> is retried with the new context which gets ERR_STALE_CLIENTID which
>>>>>>>>>>>> then client recovers from. If it's an operation (I have a GETATTR)
>>>>>>>>>>>> that gets AUTH_ERR, then it gets new context and is retried using none
>>>>>>>>>>>> service. Then RENEW gets its own AUTH_ERR as it uses a different
>>>>>>>>>>>> context, a new context is gotten, RENEW is retried over integrity and
>>>>>>>>>>>> gets ERR_STALE_CLIENTID which it recovers from.
>>>>>>>>>>>
>>>>>>>>>>> If one operation is allowed to complete, then
>>>>>>>>>>> the other will always recognize that another
>>>>>>>>>>> fresh GSS context is needed. If two are sent
>>>>>>>>>>> at the same time, they race and one always
>>>>>>>>>>> fails.
>>>>>>>>>>>
>>>>>>>>>>> Helen's test includes a second idle mount point
>>>>>>>>>>> (sec=krb5i) and maybe that is needed to trigger
>>>>>>>>>>> the race?
>>>>>>>>>>
>>>>>>>>>> Chuck, any chance to get "rpcdebug -m rpc auth" output during the
>>>>>>>>>> failure (gssd optionally) (i realize that it might alter the timings
>>>>>>>>>> and not hit the issue but worth a shot)?
>>>>>>>>>
>>>>>>>>> I'm sure that's fine. An internal tester hit this,
>>>>>>>>> not a customer, so I will ask.
>>>>>>>>>
>>>>>>>>> I agree, though, that timing might be a problem:
>>>>>>>>> these systems all have real serial consoles via
>>>>>>>>> iLOM, so /v/l/m traffic does bring everything to
>>>>>>>>> a standstill.
>>>>>>>>>
>>>>>>>>> Meanwhile, what's you're opinion about AUTH_FAILED?
>>>>>>>>> Should the server return RPCSEC_GSS_CTXPROBLEM
>>>>>>>>> in this case instead? If it did, do you think
>>>>>>>>> the Linux client would recover by creating a
>>>>>>>>> replacement GSS context?
>>>>>>>>
>>>>>>>> Ah, yes, I equated AUTH_FAILED And AUTH_ERROR in my mind. If client
>>>>>>>> receives the reason as AUTH_FAILED as oppose to CTXPROBLEM it will
>>>>>>>> fail with EIO error and will not try to create a new GSS context. So
>>>>>>>> yes, I believe it would help if the server returns any of the
>>>>>>>> following errors:
>>>>>>>>             case RPC_AUTH_REJECTEDCRED:
>>>>>>>>             case RPC_AUTH_REJECTEDVERF:
>>>>>>>>             case RPCSEC_GSS_CREDPROBLEM:
>>>>>>>>             case RPCSEC_GSS_CTXPROBLEM:
>>>>>>>>
>>>>>>>> then the client will recreate the context.
>>>>>>>
>>>>>>> Also in my testing, I can see that credential cache is per gss flavor.
>>>>>>> Just to check, what kernel version is this problem encountered on (I
>>>>>>> know you said upstream) but I just want to double check so that I can
>>>>>>> look at the correct source code.
>>>>>>
>>>>>> v4.1.12 (stable) I think.
>>>>>
>>>>> Also, can you share the network trace?
>>>>
>>>> Hi Chuck,
>>>>
>>>> I was finally able to reproduce the condition you were seeing (i.e.,
>>>> the use of the same context for different gss services).
>>>>
>>>> I enabled rpcdebug rpc auth and I can see that the 2nd request ends up
>>>> finding a gss_upcall message because it's just matched by the uid.
>>>> There is even a comment in auth_gss/auth_gss.c in gss_add_msg() saying
>>>> that if there is upcall for an uid then it won't add another upcall.
>>>> So I think the decision is made right there to share the same context
>>>> no matter the gss service.
>>>
>>> If I understand correctly, that's just what Andy predicted.
>>>
>>> That check needs to be changed to allow another upcall to be
>>> queued if the UID matches but the GSS service does not.
>>
>> Yes, we need to ensure that the service is set even though it is ignored by the server.
>
> …service is set in the gss_upcall_msg…
>

I believe a change to accommodate this would have compatibility issues.

Currently, when gssd creates a context and sends the information
back to the kernel, the first thing in the buffer is the uid. The
kernel parses the uid and then looks up the matching gss_upcall
message; right now there is no service information to match on.
If we changed gssd to send up the service for which the context
was established, then an old kernel wouldn't work with the new
nfs-utils, and a new kernel wouldn't work with an old nfs-utils.
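
To make the proposed check concrete, here is a small standalone sketch
(the struct and function names are illustrative, not the actual code in
net/sunrpc/auth_gss/auth_gss.c) contrasting the current uid-only lookup
with a lookup on both uid and GSS service. Once two upcalls can be
pending for the same uid, the downcall from gssd would also need to
carry the service to pick the right one, which is the compatibility
problem described above:

    /* Illustrative sketch only -- not the kernel's auth_gss.c. */
    #include <stddef.h>
    #include <sys/types.h>

    enum rpc_gss_svc {
            RPC_GSS_SVC_NONE      = 1,
            RPC_GSS_SVC_INTEGRITY = 2,
            RPC_GSS_SVC_PRIVACY   = 3,
    };

    struct pending_upcall {
            uid_t uid;                   /* user that triggered the upcall */
            enum rpc_gss_svc service;    /* service the triggering RPC wanted */
            struct pending_upcall *next;
    };

    /* Current behaviour: any pending upcall for the uid is reused, so a
     * WRITE (svc none) and a RENEW (svc integrity) can end up sharing
     * one brand-new context. */
    struct pending_upcall *
    find_upcall_by_uid(struct pending_upcall *head, uid_t uid)
    {
            for (; head != NULL; head = head->next)
                    if (head->uid == uid)
                            return head;
            return NULL;
    }

    /* Proposed behaviour: reuse a pending upcall only when uid and
     * service both match, so the second service level queues its own
     * upcall and gets its own context. */
    struct pending_upcall *
    find_upcall_by_uid_and_svc(struct pending_upcall *head, uid_t uid,
                               enum rpc_gss_svc service)
    {
            for (; head != NULL; head = head->next)
                    if (head->uid == uid && head->service == service)
                            return head;
            return NULL;
    }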

>>
>> —>Andy
>>
>>>
>>>
>>> --
>>> Chuck Lever
>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Problem re-establishing GSS contexts after a server reboot
  2016-07-29 16:38                     ` Chuck Lever
  2016-07-29 17:07                       ` Adamson, Andy
@ 2016-08-02 18:06                       ` J. Bruce Fields
  2016-08-03 18:53                         ` Adamson, Andy
  2016-08-03 19:14                         ` Chuck Lever
  1 sibling, 2 replies; 25+ messages in thread
From: J. Bruce Fields @ 2016-08-02 18:06 UTC (permalink / raw)
  To: Chuck Lever; +Cc: Olga Kornievskaia, Adamson, Andy, Linux NFS Mailing List

On Fri, Jul 29, 2016 at 12:38:34PM -0400, Chuck Lever wrote:
> 
> > On Jul 29, 2016, at 12:27 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> > 
> > On Mon, Jul 25, 2016 at 2:18 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> >> On Thu, Jul 21, 2016 at 5:32 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
> >>> 
> >>> 
> >>>> On Jul 21, 2016, at 10:46 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> >>>> 
> >>>>> On Thu, Jul 21, 2016 at 3:54 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> >>>>>> On Thu, Jul 21, 2016 at 1:56 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
> >>>>>> 
> >>>>>>> On Jul 21, 2016, at 6:04 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> >>>>>>> 
> >>>>>>> On Thu, Jul 21, 2016 at 2:55 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
> >>>>>>>> 
> >>>>>>>>> On Jul 20, 2016, at 6:56 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> >>>>>>>>> 
> >>>>>>>>> On Wed, Jul 20, 2016 at 5:14 AM, Adamson, Andy
> >>>>>>>>> <William.Adamson@netapp.com> wrote:
> >>>>>>>>>> 
> >>>>>>>>>>> On Jul 19, 2016, at 10:51 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
> >>>>>>>>>>> 
> >>>>>>>>>>> Hi Andy-
> >>>>>>>>>>> 
> >>>>>>>>>>> Thanks for taking the time to discuss this with me. I've
> >>>>>>>>>>> copied linux-nfs to make this e-mail also an upstream bug
> >>>>>>>>>>> report.
> >>>>>>>>>>> 
> >>>>>>>>>>> As we saw in the network capture, recovery of GSS contexts
> >>>>>>>>>>> after a server reboot fails in certain cases with NFSv4.0
> >>>>>>>>>>> and NFSv4.1 mount points.
> >>>>>>>>>>> 
> >>>>>>>>>>> The reproducer is a simple program that generates one NFS
> >>>>>>>>>>> WRITE periodically, run while the server repeatedly reboots
> >>>>>>>>>>> (or one cluster head fails over to the other and back). The
> >>>>>>>>>>> goal of the reproducer is to identify problems with state
> >>>>>>>>>>> recovery without a lot of other I/O going on to clutter up
> >>>>>>>>>>> the network capture.
> >>>>>>>>>>> 
> >>>>>>>>>>> In the failing case, sec=krb5 is specified on the mount
> >>>>>>>>>>> point, and the reproducer is run as root. We've found this
> >>>>>>>>>>> combination fails with both NFSv4.0 and NFSv4.1.
> >>>>>>>>>>> 
> >>>>>>>>>>> At mount time, the client establishes a GSS context for
> >>>>>>>>>>> lease management operations, which is bound to the client's
> >>>>>>>>>>> NFS service principal and uses GSS service "integrity."
> >>>>>>>>>>> Call this GSS context 1.
> >>>>>>>>>>> 
> >>>>>>>>>>> When the reproducer starts, a second GSS context is
> >>>>>>>>>>> established for NFS operations associated with that user.
> >>>>>>>>>>> Since the reproducer is running as root, this context is
> >>>>>>>>>>> also bound to the client's NFS service principal, but it
> >>>>>>>>>>> uses the GSS service "none" (reflecting the explicit
> >>>>>>>>>>> request for "sec=krb5"). Call this GSS context 2.
> >>>>>>>>>>> 
> >>>>>>>>>>> After the server reboots, the client re-establishes a TCP
> >>>>>>>>>>> connection with the server, and performs a RENEW
> >>>>>>>>>>> operation using context 1. Thanks to the server reboot,
> >>>>>>>>>>> contexts 1 and 2 are now stale. The server thus rejects
> >>>>>>>>>>> the RPC with RPCSEC_GSS_CTXPROBLEM.
> >>>>>>>>>>> 
> >>>>>>>>>>> The client performs a GSS_INIT_SEC_CONTEXT via an NFSv4
> >>>>>>>>>>> NULL operation. Call this GSS context 3.
> >>>>>>>>>>> 
> >>>>>>>>>>> Interestingly, the client does not resend the RENEW
> >>>>>>>>>>> operation at this point (if it did, we wouldn't see this
> >>>>>>>>>>> problem at all).
> >>>>>>>>>>> 
> >>>>>>>>>>> The client then attempts to resume the reproducer workload.
> >>>>>>>>>>> It sends an NFSv4 WRITE operation, using the first available
> >>>>>>>>>>> GSS context in UID 0's credential cache, which is context 3,
> >>>>>>>>>>> already bound to the client's NFS service principal. But GSS
> >>>>>>>>>>> service "none" is used for this operation, since it is on
> >>>>>>>>>>> behalf of the mount where sec=krb5 was specified.
> >>>>>>>>>>> 
> >>>>>>>>>>> The RPC is accepted, but the server reports
> >>>>>>>>>>> NFS4ERR_STALE_STATEID, since it has recently rebooted.
> >>>>>>>>>>> 
> >>>>>>>>>>> The client responds by attempting state recovery. The
> >>>>>>>>>>> first operation it tries is another RENEW. Since this is
> >>>>>>>>>>> a lease management operation, the client looks in UID 0's
> >>>>>>>>>>> credential cache again and finds the recently established
> >>>>>>>>>>> context 3. It tries the RENEW operation using GSS context
> >>>>>>>>>>> 3 with GSS service "integrity."
> >>>>>>>>>>> 
> >>>>>>>>>>> The server rejects the RENEW RPC with AUTH_FAILED, and
> >>>>>>>>>>> the client reports that "check lease failed" and
> >>>>>>>>>>> terminates state recovery.
> >>>>>>>>>>> 
> >>>>>>>>>>> The client re-drives the WRITE operation with the stale
> >>>>>>>>>>> stateid with predictable results. The client again tries
> >>>>>>>>>>> to recover state by sending a RENEW, and still uses the
> >>>>>>>>>>> same GSS context 3 with service "integrity" and gets the
> >>>>>>>>>>> same result. A (perhaps slow-motion) STALE_STATEID loop
> >>>>>>>>>>> ensues, and the client mount point is deadlocked.
> >>>>>>>>>>> 
> >>>>>>>>>>> Your analysis was that because the reproducer is run as
> >>>>>>>>>>> root, both the reproducer's I/O operations, and lease
> >>>>>>>>>>> management operations, attempt to use the same GSS context
> >>>>>>>>>>> in UID 0's credential cache, but each uses different GSS
> >>>>>>>>>>> services.
> >>>>>>>>>> 
> >>>>>>>>>> As RFC2203 states, "In a creation request, the seq_num and service fields are undefined and both must be ignored by the server”
> >>>>>>>>>> So a context creation request while kicked off by an operation with a service attached (e.g. WRITE uses rpc_gss_svc_none and RENEW uses rpc_gss_svc_integrity), can be used by either service level.
> >>>>>>>>>> AFAICS a single GSS context could in theory be used for all service levels, but in practice, GSS contexts are restricted to a service level (by client? by server? ) once they are used.
> >>>>>>>>>> 
> >>>>>>>>>> 
> >>>>>>>>>>> The key issue seems to be why, when the mount
> >>>>>>>>>>> is first established, the client is correctly able to
> >>>>>>>>>>> establish two separate GSS contexts for UID 0; but after
> >>>>>>>>>>> a server reboot, the client attempts to use the same GSS
> >>>>>>>>>>> context with two different GSS services.
> >>>>>>>>>> 
> >>>>>>>>>> I speculate that it is a race between the WRITE and the RENEW to use the same newly created GSS context that has not been used yet, and so has no assigned service level, and the two requests race to set the service level.
> >>>>>>>>> 
> >>>>>>>>> I agree with Andy. It must be a tight race.
> >>>>>>>> 
> >>>>>>>> In one capture I see something like this after
> >>>>>>>> the server restarts:
> >>>>>>>> 
> >>>>>>>> SYN
> >>>>>>>> SYN, ACK
> >>>>>>>> ACK
> >>>>>>>> C WRITE
> >>>>>>>> C SEQUENCE
> >>>>>>>> R WRITE -> CTX_PROBLEM
> >>>>>>>> R SEQUENCE -> CTX_PROBLEM
> >>>>>>>> C NULL (KRB5_AP_REQ)
> >>>>>>>> R NULL (KRB5_AP_REP)
> >>>>>>>> C WRITE
> >>>>>>>> C SEQUENCE
> >>>>>>>> R WRITE -> NFS4ERR_STALE_STATEID
> >>>>>>>> R SEQUENCE -> AUTH_FAILED
> >>>>>>>> 
> >>>>>>>> Andy's theory neatly explains this behavior.
> >>>>>>>> 
> >>>>>>>> 
> >>>>>>>>> I have tried to reproduce
> >>>>>>>>> your scenario and in my tests of rebooting the server all recover
> >>>>>>>>> correctly. In my case, if RENEW was the one hitting the AUTH_ERR then
> >>>>>>>>> the new context is established and then RENEW using integrity service
> >>>>>>>>> is retried with the new context which gets ERR_STALE_CLIENTID which
> >>>>>>>>> then client recovers from. If it's an operation (I have a GETATTR)
> >>>>>>>>> that gets AUTH_ERR, then it gets new context and is retried using none
> >>>>>>>>> service. Then RENEW gets its own AUTH_ERR as it uses a different
> >>>>>>>>> context, a new context is gotten, RENEW is retried over integrity and
> >>>>>>>>> gets ERR_STALE_CLIENTID which it recovers from.
> >>>>>>>> 
> >>>>>>>> If one operation is allowed to complete, then
> >>>>>>>> the other will always recognize that another
> >>>>>>>> fresh GSS context is needed. If two are sent
> >>>>>>>> at the same time, they race and one always
> >>>>>>>> fails.
> >>>>>>>> 
> >>>>>>>> Helen's test includes a second idle mount point
> >>>>>>>> (sec=krb5i) and maybe that is needed to trigger
> >>>>>>>> the race?
> >>>>>>> 
> >>>>>>> Chuck, any chance to get "rpcdebug -m rpc auth" output during the
> >>>>>>> failure (gssd optionally) (i realize that it might alter the timings
> >>>>>>> and not hit the issue but worth a shot)?
> >>>>>> 
> >>>>>> I'm sure that's fine. An internal tester hit this,
> >>>>>> not a customer, so I will ask.
> >>>>>> 
> >>>>>> I agree, though, that timing might be a problem:
> >>>>>> these systems all have real serial consoles via
> >>>>>> iLOM, so /v/l/m traffic does bring everything to
> >>>>>> a standstill.
> >>>>>> 
> >>>>>> Meanwhile, what's you're opinion about AUTH_FAILED?
> >>>>>> Should the server return RPCSEC_GSS_CTXPROBLEM
> >>>>>> in this case instead? If it did, do you think
> >>>>>> the Linux client would recover by creating a
> >>>>>> replacement GSS context?
> >>>>> 
> >>>>> Ah, yes, I equated AUTH_FAILED And AUTH_ERROR in my mind. If client
> >>>>> receives the reason as AUTH_FAILED as oppose to CTXPROBLEM it will
> >>>>> fail with EIO error and will not try to create a new GSS context. So
> >>>>> yes, I believe it would help if the server returns any of the
> >>>>> following errors:
> >>>>>               case RPC_AUTH_REJECTEDCRED:
> >>>>>               case RPC_AUTH_REJECTEDVERF:
> >>>>>               case RPCSEC_GSS_CREDPROBLEM:
> >>>>>               case RPCSEC_GSS_CTXPROBLEM:
> >>>>> 
> >>>>> then the client will recreate the context.
> >>>> 
> >>>> Also in my testing, I can see that credential cache is per gss flavor.
> >>>> Just to check, what kernel version is this problem encountered on (I
> >>>> know you said upstream) but I just want to double check so that I can
> >>>> look at the correct source code.
> >>> 
> >>> v4.1.12 (stable) I think.
> >> 
> >> Also, can you share the network trace?
> > 
> > Hi Chuck,
> > 
> > I was finally able to reproduce the condition you were seeing (i.e.,
> > the use of the same context for different gss services).
> > 
> > I enabled rpcdebug rpc auth and I can see that the 2nd request ends up
> > finding a gss_upcall message because it's just matched by the uid.
> > There is even a comment in auth_gss/auth_gss.c in gss_add_msg() saying
> > that if there is upcall for an uid then it won't add another upcall.
> > So I think the decision is made right there to share the same context
> > no matter the gss service.
> 
> If I understand correctly, that's just what Andy predicted.
> 
> That check needs to be changed to allow another upcall to be
> queued if the UID matches but the GSS service does not.

You should be able to use the same context with different services.

Apologies, I haven't caught up with the whole discussion above; this one
point just jumped out at me. If you're trying to request a whole new
gss context just so you can use, e.g., integrity instead of privacy,
then something's wrong.
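
For reference, the reason the same context handle can in principle be
used at different service levels is that RPCSEC_GSS carries the service
in every request's credential, right next to the context handle. The
following is abridged and paraphrased from the XDR in RFC 2203, not a
new definition:

    enum rpc_gss_service_t {
            rpc_gss_svc_none      = 1,
            rpc_gss_svc_integrity = 2,
            rpc_gss_svc_privacy   = 3
    };

    struct rpc_gss_cred_vers_1_t {
            rpc_gss_proc_t    gss_proc;   /* control procedure */
            unsigned int      seq_num;    /* sequence number   */
            rpc_gss_service_t service;    /* service used      */
            opaque            handle<>;   /* context handle    */
    };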

--b.

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Problem re-establishing GSS contexts after a server reboot
  2016-08-02 18:06                       ` J. Bruce Fields
@ 2016-08-03 18:53                         ` Adamson, Andy
  2016-08-03 19:56                           ` Olga Kornievskaia
  2016-08-03 19:14                         ` Chuck Lever
  1 sibling, 1 reply; 25+ messages in thread
From: Adamson, Andy @ 2016-08-03 18:53 UTC (permalink / raw)
  To: J. Bruce Fields
  Cc: Chuck Lever, Olga Kornievskaia, Adamson, Andy, Linux NFS Mailing List

[base64-encoded message body; the portion captured here contains only
the quoted thread, truncated]
bmUgYWx3YXlzDQo+Pj4+Pj4+Pj4+IGZhaWxzLg0KPj4+Pj4+Pj4+PiANCj4+Pj4+Pj4+Pj4gSGVs
ZW4ncyB0ZXN0IGluY2x1ZGVzIGEgc2Vjb25kIGlkbGUgbW91bnQgcG9pbnQNCj4+Pj4+Pj4+Pj4g
KHNlYz1rcmI1aSkgYW5kIG1heWJlIHRoYXQgaXMgbmVlZGVkIHRvIHRyaWdnZXINCj4+Pj4+Pj4+
Pj4gdGhlIHJhY2U/DQo+Pj4+Pj4+Pj4gDQo+Pj4+Pj4+Pj4gQ2h1Y2ssIGFueSBjaGFuY2UgdG8g
Z2V0ICJycGNkZWJ1ZyAtbSBycGMgYXV0aCIgb3V0cHV0IGR1cmluZyB0aGUNCj4+Pj4+Pj4+PiBm
YWlsdXJlIChnc3NkIG9wdGlvbmFsbHkpIChpIHJlYWxpemUgdGhhdCBpdCBtaWdodCBhbHRlciB0
aGUgdGltaW5ncw0KPj4+Pj4+Pj4+IGFuZCBub3QgaGl0IHRoZSBpc3N1ZSBidXQgd29ydGggYSBz
aG90KT8NCj4+Pj4+Pj4+IA0KPj4+Pj4+Pj4gSSdtIHN1cmUgdGhhdCdzIGZpbmUuIEFuIGludGVy
bmFsIHRlc3RlciBoaXQgdGhpcywNCj4+Pj4+Pj4+IG5vdCBhIGN1c3RvbWVyLCBzbyBJIHdpbGwg
YXNrLg0KPj4+Pj4+Pj4gDQo+Pj4+Pj4+PiBJIGFncmVlLCB0aG91Z2gsIHRoYXQgdGltaW5nIG1p
Z2h0IGJlIGEgcHJvYmxlbToNCj4+Pj4+Pj4+IHRoZXNlIHN5c3RlbXMgYWxsIGhhdmUgcmVhbCBz
ZXJpYWwgY29uc29sZXMgdmlhDQo+Pj4+Pj4+PiBpTE9NLCBzbyAvdi9sL20gdHJhZmZpYyBkb2Vz
IGJyaW5nIGV2ZXJ5dGhpbmcgdG8NCj4+Pj4+Pj4+IGEgc3RhbmRzdGlsbC4NCj4+Pj4+Pj4+IA0K
Pj4+Pj4+Pj4gTWVhbndoaWxlLCB3aGF0J3MgeW91J3JlIG9waW5pb24gYWJvdXQgQVVUSF9GQUlM
RUQ/DQo+Pj4+Pj4+PiBTaG91bGQgdGhlIHNlcnZlciByZXR1cm4gUlBDU0VDX0dTU19DVFhQUk9C
TEVNDQo+Pj4+Pj4+PiBpbiB0aGlzIGNhc2UgaW5zdGVhZD8gSWYgaXQgZGlkLCBkbyB5b3UgdGhp
bmsNCj4+Pj4+Pj4+IHRoZSBMaW51eCBjbGllbnQgd291bGQgcmVjb3ZlciBieSBjcmVhdGluZyBh
DQo+Pj4+Pj4+PiByZXBsYWNlbWVudCBHU1MgY29udGV4dD8NCj4+Pj4+Pj4gDQo+Pj4+Pj4+IEFo
LCB5ZXMsIEkgZXF1YXRlZCBBVVRIX0ZBSUxFRCBBbmQgQVVUSF9FUlJPUiBpbiBteSBtaW5kLiBJ
ZiBjbGllbnQNCj4+Pj4+Pj4gcmVjZWl2ZXMgdGhlIHJlYXNvbiBhcyBBVVRIX0ZBSUxFRCBhcyBv
cHBvc2UgdG8gQ1RYUFJPQkxFTSBpdCB3aWxsDQo+Pj4+Pj4+IGZhaWwgd2l0aCBFSU8gZXJyb3Ig
YW5kIHdpbGwgbm90IHRyeSB0byBjcmVhdGUgYSBuZXcgR1NTIGNvbnRleHQuIFNvDQo+Pj4+Pj4+
IHllcywgSSBiZWxpZXZlIGl0IHdvdWxkIGhlbHAgaWYgdGhlIHNlcnZlciByZXR1cm5zIGFueSBv
ZiB0aGUNCj4+Pj4+Pj4gZm9sbG93aW5nIGVycm9yczoNCj4+Pj4+Pj4gICAgICAgICAgICAgIGNh
c2UgUlBDX0FVVEhfUkVKRUNURURDUkVEOg0KPj4+Pj4+PiAgICAgICAgICAgICAgY2FzZSBSUENf
QVVUSF9SRUpFQ1RFRFZFUkY6DQo+Pj4+Pj4+ICAgICAgICAgICAgICBjYXNlIFJQQ1NFQ19HU1Nf
Q1JFRFBST0JMRU06DQo+Pj4+Pj4+ICAgICAgICAgICAgICBjYXNlIFJQQ1NFQ19HU1NfQ1RYUFJP
QkxFTToNCj4+Pj4+Pj4gDQo+Pj4+Pj4+IHRoZW4gdGhlIGNsaWVudCB3aWxsIHJlY3JlYXRlIHRo
ZSBjb250ZXh0Lg0KPj4+Pj4+IA0KPj4+Pj4+IEFsc28gaW4gbXkgdGVzdGluZywgSSBjYW4gc2Vl
IHRoYXQgY3JlZGVudGlhbCBjYWNoZSBpcyBwZXIgZ3NzIGZsYXZvci4NCj4+Pj4+PiBKdXN0IHRv
IGNoZWNrLCB3aGF0IGtlcm5lbCB2ZXJzaW9uIGlzIHRoaXMgcHJvYmxlbSBlbmNvdW50ZXJlZCBv
biAoSQ0KPj4+Pj4+IGtub3cgeW91IHNhaWQgdXBzdHJlYW0pIGJ1dCBJIGp1c3Qgd2FudCB0byBk
b3VibGUgY2hlY2sgc28gdGhhdCBJIGNhbg0KPj4+Pj4+IGxvb2sgYXQgdGhlIGNvcnJlY3Qgc291
cmNlIGNvZGUuDQo+Pj4+PiANCj4+Pj4+IHY0LjEuMTIgKHN0YWJsZSkgSSB0aGluay4NCj4+Pj4g
DQo+Pj4+IEFsc28sIGNhbiB5b3Ugc2hhcmUgdGhlIG5ldHdvcmsgdHJhY2U/DQo+Pj4gDQo+Pj4g
SGkgQ2h1Y2ssDQo+Pj4gDQo+Pj4gSSB3YXMgZmluYWxseSBhYmxlIHRvIHJlcHJvZHVjZSB0aGUg
Y29uZGl0aW9uIHlvdSB3ZXJlIHNlZWluZyAoaS5lLiwNCj4+PiB0aGUgdXNlIG9mIHRoZSBzYW1l
IGNvbnRleHQgZm9yIGRpZmZlcmVudCBnc3Mgc2VydmljZXMpLg0KPj4+IA0KPj4+IEkgZW5hYmxl
ZCBycGNkZWJ1ZyBycGMgYXV0aCBhbmQgSSBjYW4gc2VlIHRoYXQgdGhlIDJuZCByZXF1ZXN0IGVu
ZHMgdXANCj4+PiBmaW5kaW5nIGEgZ3NzX3VwY2FsbCBtZXNzYWdlIGJlY2F1c2UgaXQncyBqdXN0
IG1hdGNoZWQgYnkgdGhlIHVpZC4NCj4+PiBUaGVyZSBpcyBldmVuIGEgY29tbWVudCBpbiBhdXRo
X2dzcy9hdXRoX2dzcy5jIGluIGdzc19hZGRfbXNnKCkgc2F5aW5nDQo+Pj4gdGhhdCBpZiB0aGVy
ZSBpcyB1cGNhbGwgZm9yIGFuIHVpZCB0aGVuIGl0IHdvbid0IGFkZCBhbm90aGVyIHVwY2FsbC4N
Cj4+PiBTbyBJIHRoaW5rIHRoZSBkZWNpc2lvbiBpcyBtYWRlIHJpZ2h0IHRoZXJlIHRvIHNoYXJl
IHRoZSBzYW1lIGNvbnRleHQNCj4+PiBubyBtYXR0ZXIgdGhlIGdzcyBzZXJ2aWNlLg0KPj4gDQo+
PiBJZiBJIHVuZGVyc3RhbmQgY29ycmVjdGx5LCB0aGF0J3MganVzdCB3aGF0IEFuZHkgcHJlZGlj
dGVkLg0KPj4gDQo+PiBUaGF0IGNoZWNrIG5lZWRzIHRvIGJlIGNoYW5nZWQgdG8gYWxsb3cgYW5v
dGhlciB1cGNhbGwgdG8gYmUNCj4+IHF1ZXVlZCBpZiB0aGUgVUlEIG1hdGNoZXMgYnV0IHRoZSBH
U1Mgc2VydmljZSBkb2VzIG5vdC4NCj4gDQo+IFlvdSBzaG91bGQgYmUgYWJsZSB0byB1c2UgdGhl
IHNhbWUgY29udGV4dCB3aXRoIGRpZmZlcmVudCBzZXJ2aWNlcy4NCj4gDQo+IEFwb2xvZ2llcywg
SSBoYXZlbid0IGNhdWdodCB1cCB3aXRoIHRoZSB3aG9sZSBkaXNjdXNzaW9uIGFib3ZlLCB0aGlz
IG9uZQ0KPiBwb2ludCBqdXN0IGp1bXBlZCBvdXQgYXQgbWUuICBJZiB5b3UncmUgdHJ5aW5nIHRv
IHJlcXVlc3QgYSB3aG9sZSBuZXcNCj4gZ3NzIGNvbnRleHQganVzdCBzbyB5b3UgY2FuIHVzZSwg
ZS5nLiwgaW50ZWdyaXR5IGluc3RlYWQgb2YgcHJpdmFjeSwNCj4gdGhlbiBzb21ldGhpbmcncyB3
cm9uZy4NCg0KVGhlIGNsaWVudCBjb2RlIGhhcyBzZXBhcmF0ZSBnc3NfY3JlZCBjYWNoZXMgKGFu
ZCBzbyBzZXBhcmF0ZSBnc3NfY29udGV44oCZcykgcGVyIGdzc19hdXRoLCB3aGljaCBpcyBwZXIg
c2VydmljZS4gQUZBSUsgdGhlIGNsaWVudCBoYXMgYWx3YXlzIG9idGFpbmVkIGEgc2VwYXJhdGUg
Y29udGV4dCBwZXIgc2VydmljZSBwZXIgc2VydmVyLiBXaGlsZSB3ZSBjYW4gdXNlIHRoZSBzYW1l
IGdzcyBjb250ZXh0IHdpdGggZGlmZmVyZW50IHNlcnZpY2VzLCB0aGF0IGlzIG5vdCB0aGUgZGVz
aWduIGNob2ljZS4NCg0K4oCUPkFuZHkNCg0KDQo+IA0KPiAtLWIuDQoNCg==

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Problem re-establishing GSS contexts after a server reboot
  2016-08-02 18:06                       ` J. Bruce Fields
  2016-08-03 18:53                         ` Adamson, Andy
@ 2016-08-03 19:14                         ` Chuck Lever
  2016-08-03 19:34                           ` J. Bruce Fields
  1 sibling, 1 reply; 25+ messages in thread
From: Chuck Lever @ 2016-08-03 19:14 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: Olga Kornievskaia, Adamson, Andy, Linux NFS Mailing List


> On Aug 2, 2016, at 2:06 PM, bfields@fieldses.org wrote:
> 
> On Fri, Jul 29, 2016 at 12:38:34PM -0400, Chuck Lever wrote:
>> 
>>> On Jul 29, 2016, at 12:27 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>> 
>>> On Mon, Jul 25, 2016 at 2:18 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>> On Thu, Jul 21, 2016 at 5:32 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>> 
>>>>> 
>>>>>> On Jul 21, 2016, at 10:46 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>>> 
>>>>>>> On Thu, Jul 21, 2016 at 3:54 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>>>>> On Thu, Jul 21, 2016 at 1:56 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>>>>> 
>>>>>>>>> On Jul 21, 2016, at 6:04 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>>>>>> 
>>>>>>>>> On Thu, Jul 21, 2016 at 2:55 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>>>>>>> 
>>>>>>>>>>> On Jul 20, 2016, at 6:56 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>>>>>>>> 
>>>>>>>>>>> On Wed, Jul 20, 2016 at 5:14 AM, Adamson, Andy
>>>>>>>>>>> <William.Adamson@netapp.com> wrote:
>>>>>>>>>>>> 
>>>>>>>>>>>>> On Jul 19, 2016, at 10:51 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Hi Andy-
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Thanks for taking the time to discuss this with me. I've
>>>>>>>>>>>>> copied linux-nfs to make this e-mail also an upstream bug
>>>>>>>>>>>>> report.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> As we saw in the network capture, recovery of GSS contexts
>>>>>>>>>>>>> after a server reboot fails in certain cases with NFSv4.0
>>>>>>>>>>>>> and NFSv4.1 mount points.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> The reproducer is a simple program that generates one NFS
>>>>>>>>>>>>> WRITE periodically, run while the server repeatedly reboots
>>>>>>>>>>>>> (or one cluster head fails over to the other and back). The
>>>>>>>>>>>>> goal of the reproducer is to identify problems with state
>>>>>>>>>>>>> recovery without a lot of other I/O going on to clutter up
>>>>>>>>>>>>> the network capture.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> In the failing case, sec=krb5 is specified on the mount
>>>>>>>>>>>>> point, and the reproducer is run as root. We've found this
>>>>>>>>>>>>> combination fails with both NFSv4.0 and NFSv4.1.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> At mount time, the client establishes a GSS context for
>>>>>>>>>>>>> lease management operations, which is bound to the client's
>>>>>>>>>>>>> NFS service principal and uses GSS service "integrity."
>>>>>>>>>>>>> Call this GSS context 1.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> When the reproducer starts, a second GSS context is
>>>>>>>>>>>>> established for NFS operations associated with that user.
>>>>>>>>>>>>> Since the reproducer is running as root, this context is
>>>>>>>>>>>>> also bound to the client's NFS service principal, but it
>>>>>>>>>>>>> uses the GSS service "none" (reflecting the explicit
>>>>>>>>>>>>> request for "sec=krb5"). Call this GSS context 2.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> After the server reboots, the client re-establishes a TCP
>>>>>>>>>>>>> connection with the server, and performs a RENEW
>>>>>>>>>>>>> operation using context 1. Thanks to the server reboot,
>>>>>>>>>>>>> contexts 1 and 2 are now stale. The server thus rejects
>>>>>>>>>>>>> the RPC with RPCSEC_GSS_CTXPROBLEM.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> The client performs a GSS_INIT_SEC_CONTEXT via an NFSv4
>>>>>>>>>>>>> NULL operation. Call this GSS context 3.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Interestingly, the client does not resend the RENEW
>>>>>>>>>>>>> operation at this point (if it did, we wouldn't see this
>>>>>>>>>>>>> problem at all).
>>>>>>>>>>>>> 
>>>>>>>>>>>>> The client then attempts to resume the reproducer workload.
>>>>>>>>>>>>> It sends an NFSv4 WRITE operation, using the first available
>>>>>>>>>>>>> GSS context in UID 0's credential cache, which is context 3,
>>>>>>>>>>>>> already bound to the client's NFS service principal. But GSS
>>>>>>>>>>>>> service "none" is used for this operation, since it is on
>>>>>>>>>>>>> behalf of the mount where sec=krb5 was specified.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> The RPC is accepted, but the server reports
>>>>>>>>>>>>> NFS4ERR_STALE_STATEID, since it has recently rebooted.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> The client responds by attempting state recovery. The
>>>>>>>>>>>>> first operation it tries is another RENEW. Since this is
>>>>>>>>>>>>> a lease management operation, the client looks in UID 0's
>>>>>>>>>>>>> credential cache again and finds the recently established
>>>>>>>>>>>>> context 3. It tries the RENEW operation using GSS context
>>>>>>>>>>>>> 3 with GSS service "integrity."
>>>>>>>>>>>>> 
>>>>>>>>>>>>> The server rejects the RENEW RPC with AUTH_FAILED, and
>>>>>>>>>>>>> the client reports that "check lease failed" and
>>>>>>>>>>>>> terminates state recovery.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> The client re-drives the WRITE operation with the stale
>>>>>>>>>>>>> stateid with predictable results. The client again tries
>>>>>>>>>>>>> to recover state by sending a RENEW, and still uses the
>>>>>>>>>>>>> same GSS context 3 with service "integrity" and gets the
>>>>>>>>>>>>> same result. A (perhaps slow-motion) STALE_STATEID loop
>>>>>>>>>>>>> ensues, and the client mount point is deadlocked.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Your analysis was that because the reproducer is run as
>>>>>>>>>>>>> root, both the reproducer's I/O operations, and lease
>>>>>>>>>>>>> management operations, attempt to use the same GSS context
>>>>>>>>>>>>> in UID 0's credential cache, but each uses different GSS
>>>>>>>>>>>>> services.
>>>>>>>>>>>> 
>>>>>>>>>>>> As RFC2203 states, "In a creation request, the seq_num and service fields are undefined and both must be ignored by the server”
>>>>>>>>>>>> So a context creation request while kicked off by an operation with a service attached (e.g. WRITE uses rpc_gss_svc_none and RENEW uses rpc_gss_svc_integrity), can be used by either service level.
>>>>>>>>>>>> AFAICS a single GSS context could in theory be used for all service levels, but in practice, GSS contexts are restricted to a service level (by client? by server? ) once they are used.
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>>> The key issue seems to be why, when the mount
>>>>>>>>>>>>> is first established, the client is correctly able to
>>>>>>>>>>>>> establish two separate GSS contexts for UID 0; but after
>>>>>>>>>>>>> a server reboot, the client attempts to use the same GSS
>>>>>>>>>>>>> context with two different GSS services.
>>>>>>>>>>>> 
>>>>>>>>>>>> I speculate that it is a race between the WRITE and the RENEW to use the same newly created GSS context that has not been used yet, and so has no assigned service level, and the two requests race to set the service level.
>>>>>>>>>>> 
>>>>>>>>>>> I agree with Andy. It must be a tight race.
>>>>>>>>>> 
>>>>>>>>>> In one capture I see something like this after
>>>>>>>>>> the server restarts:
>>>>>>>>>> 
>>>>>>>>>> SYN
>>>>>>>>>> SYN, ACK
>>>>>>>>>> ACK
>>>>>>>>>> C WRITE
>>>>>>>>>> C SEQUENCE
>>>>>>>>>> R WRITE -> CTX_PROBLEM
>>>>>>>>>> R SEQUENCE -> CTX_PROBLEM
>>>>>>>>>> C NULL (KRB5_AP_REQ)
>>>>>>>>>> R NULL (KRB5_AP_REP)
>>>>>>>>>> C WRITE
>>>>>>>>>> C SEQUENCE
>>>>>>>>>> R WRITE -> NFS4ERR_STALE_STATEID
>>>>>>>>>> R SEQUENCE -> AUTH_FAILED
>>>>>>>>>> 
>>>>>>>>>> Andy's theory neatly explains this behavior.
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>>> I have tried to reproduce
>>>>>>>>>>> your scenario and in my tests of rebooting the server all recover
>>>>>>>>>>> correctly. In my case, if RENEW was the one hitting the AUTH_ERR then
>>>>>>>>>>> the new context is established and then RENEW using integrity service
>>>>>>>>>>> is retried with the new context which gets ERR_STALE_CLIENTID which
>>>>>>>>>>> then client recovers from. If it's an operation (I have a GETATTR)
>>>>>>>>>>> that gets AUTH_ERR, then it gets new context and is retried using none
>>>>>>>>>>> service. Then RENEW gets its own AUTH_ERR as it uses a different
>>>>>>>>>>> context, a new context is gotten, RENEW is retried over integrity and
>>>>>>>>>>> gets ERR_STALE_CLIENTID which it recovers from.
>>>>>>>>>> 
>>>>>>>>>> If one operation is allowed to complete, then
>>>>>>>>>> the other will always recognize that another
>>>>>>>>>> fresh GSS context is needed. If two are sent
>>>>>>>>>> at the same time, they race and one always
>>>>>>>>>> fails.
>>>>>>>>>> 
>>>>>>>>>> Helen's test includes a second idle mount point
>>>>>>>>>> (sec=krb5i) and maybe that is needed to trigger
>>>>>>>>>> the race?
>>>>>>>>> 
>>>>>>>>> Chuck, any chance to get "rpcdebug -m rpc auth" output during the
>>>>>>>>> failure (gssd optionally) (I realize that it might alter the timings
>>>>>>>>> and not hit the issue but worth a shot)?
>>>>>>>> 
>>>>>>>> I'm sure that's fine. An internal tester hit this,
>>>>>>>> not a customer, so I will ask.
>>>>>>>> 
>>>>>>>> I agree, though, that timing might be a problem:
>>>>>>>> these systems all have real serial consoles via
>>>>>>>> iLOM, so /v/l/m traffic does bring everything to
>>>>>>>> a standstill.
>>>>>>>> 
>>>>>>>> Meanwhile, what's your opinion about AUTH_FAILED?
>>>>>>>> Should the server return RPCSEC_GSS_CTXPROBLEM
>>>>>>>> in this case instead? If it did, do you think
>>>>>>>> the Linux client would recover by creating a
>>>>>>>> replacement GSS context?
>>>>>>> 
>>>>>>> Ah, yes, I equated AUTH_FAILED and AUTH_ERROR in my mind. If the client
>>>>>>> receives the reason as AUTH_FAILED as opposed to CTXPROBLEM it will
>>>>>>> fail with an EIO error and will not try to create a new GSS context. So
>>>>>>> yes, I believe it would help if the server returns any of the
>>>>>>> following errors:
>>>>>>>              case RPC_AUTH_REJECTEDCRED:
>>>>>>>              case RPC_AUTH_REJECTEDVERF:
>>>>>>>              case RPCSEC_GSS_CREDPROBLEM:
>>>>>>>              case RPCSEC_GSS_CTXPROBLEM:
>>>>>>> 
>>>>>>> then the client will recreate the context.
>>>>>> 
>>>>>> Also in my testing, I can see that credential cache is per gss flavor.
>>>>>> Just to check, what kernel version is this problem encountered on (I
>>>>>> know you said upstream) but I just want to double check so that I can
>>>>>> look at the correct source code.
>>>>> 
>>>>> v4.1.12 (stable) I think.
>>>> 
>>>> Also, can you share the network trace?
>>> 
>>> Hi Chuck,
>>> 
>>> I was finally able to reproduce the condition you were seeing (i.e.,
>>> the use of the same context for different gss services).
>>> 
>>> I enabled rpcdebug rpc auth and I can see that the 2nd request ends up
>>> finding a gss_upcall message because it's just matched by the uid.
>>> There is even a comment in auth_gss/auth_gss.c in gss_add_msg() saying
>>> that if there is an upcall for a uid then it won't add another upcall.
>>> So I think the decision is made right there to share the same context
>>> no matter the gss service.
>> 
>> If I understand correctly, that's just what Andy predicted.
>> 
>> That check needs to be changed to allow another upcall to be
>> queued if the UID matches but the GSS service does not.
> 
> You should be able to use the same context with different services.
> 
> Apologies, I haven't caught up with the whole discussion above, this one
> point just jumped out at me.  If you're trying to request a whole new
> gss context just so you can use, e.g., integrity instead of privacy,
> then something's wrong.

Hi Bruce-

As I understand it, GSS contexts are fungible until they have been
used. On first use, the context is bound to a particular service.
Subsequently it cannot be used with another service.

The Solaris server seems to expect that separate GSS contexts are
needed when the same UID employs different GSS services. If Solaris
is wrong about this, can you show me RFC language that specifically
allows it? I can take that back to the Solaris developers.

Otherwise, it seems that the Linux client also believes separate
contexts are necessary: two contexts are set up for UID 0 after the
initial mount processing is complete.
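
To make that concrete, here is a minimal sketch of the behavior as I
understand it (this is neither the Solaris nor the Linux implementation;
the types and names below are invented for illustration): the context
latches whichever GSS service touches it first, and a later request
using a different service is refused, which is exactly what the
recovery RENEW runs into.

#include <stdbool.h>
#include <stdio.h>

enum gss_svc { GSS_SVC_UNSET = 0, GSS_SVC_NONE, GSS_SVC_INTEGRITY, GSS_SVC_PRIVACY };

struct gss_ctx_entry {
	unsigned int handle;	/* handle returned by the NULL/init exchange */
	enum gss_svc bound_svc;	/* GSS_SVC_UNSET until the first data exchange */
};

/* First data exchange binds the service; later requests must match it. */
static bool ctx_service_ok(struct gss_ctx_entry *ctx, enum gss_svc svc)
{
	if (ctx->bound_svc == GSS_SVC_UNSET) {
		ctx->bound_svc = svc;
		return true;
	}
	return ctx->bound_svc == svc;	/* otherwise e.g. AUTH_FAILED */
}

int main(void)
{
	struct gss_ctx_entry ctx3 = { .handle = 3, .bound_svc = GSS_SVC_UNSET };

	/* WRITE with service "none" arrives first and binds the context. */
	printf("WRITE (none):      %s\n",
	       ctx_service_ok(&ctx3, GSS_SVC_NONE) ? "accepted" : "rejected");
	/* RENEW with "integrity" then fails against the same context. */
	printf("RENEW (integrity): %s\n",
	       ctx_service_ok(&ctx3, GSS_SVC_INTEGRITY) ? "accepted" : "rejected");
	return 0;
}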


--
Chuck Lever




^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Problem re-establishing GSS contexts after a server reboot
  2016-08-03 19:14                         ` Chuck Lever
@ 2016-08-03 19:34                           ` J. Bruce Fields
  0 siblings, 0 replies; 25+ messages in thread
From: J. Bruce Fields @ 2016-08-03 19:34 UTC (permalink / raw)
  To: Chuck Lever; +Cc: Olga Kornievskaia, Adamson, Andy, Linux NFS Mailing List

On Wed, Aug 03, 2016 at 03:14:21PM -0400, Chuck Lever wrote:
> 
> On Aug 2, 2016, at 2:06 PM, bfields@fieldses.org wrote:
> > You should be able to use the same context with different services.
> > 
> > Apologies, I haven't caught up with the whole discussion above, this one
> > point just jumped out at me.  If you're trying to request a whole new
> > gss context just so you can use, e.g., integrity instead of privacy,
> > then something's wrong.
> 
> As I understand it, GSS contexts are fungible until they have been
> used. On first use, the context is bound to a particular service.
> Subsequently it cannot be used with another service.
> 
> The Solaris server seems to expect that separate GSS contexts are
> needed when the same UID employs different GSS services. If Solaris
> is wrong about this, can you show me RFC language that specifically
> allows it? I can take that back to the Solaris developers.

No, you're right, apologies; from https://tools.ietf.org/html/rfc2203

	Although clients can change the security service and QOP used on
	a per-request basis, this may not be acceptable to all RPC
	services; some RPC services may "lock" the data exchange phase
	into using the QOP and service used on the first data exchange
	message.

--b.

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Problem re-establishing GSS contexts after a server reboot
  2016-08-03 18:53                         ` Adamson, Andy
@ 2016-08-03 19:56                           ` Olga Kornievskaia
  2016-08-03 20:06                             ` J. Bruce Fields
  0 siblings, 1 reply; 25+ messages in thread
From: Olga Kornievskaia @ 2016-08-03 19:56 UTC (permalink / raw)
  To: Adamson, Andy; +Cc: J. Bruce Fields, Chuck Lever, Linux NFS Mailing List

On Wed, Aug 3, 2016 at 2:53 PM, Adamson, Andy
<William.Adamson@netapp.com> wrote:
>
>> On Aug 2, 2016, at 2:06 PM, J. Bruce Fields <bfields@fieldses.org> wrote:
>>
>> On Fri, Jul 29, 2016 at 12:38:34PM -0400, Chuck Lever wrote:
>>>
>>>> On Jul 29, 2016, at 12:27 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>
>>>> On Mon, Jul 25, 2016 at 2:18 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>> On Thu, Jul 21, 2016 at 5:32 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>>>
>>>>>>
>>>>>>> On Jul 21, 2016, at 10:46 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>>>>
>>>>>>>> On Thu, Jul 21, 2016 at 3:54 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>>>>>> On Thu, Jul 21, 2016 at 1:56 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>>>>>>
>>>>>>>>>> On Jul 21, 2016, at 6:04 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>>>>>>>
>>>>>>>>>> On Thu, Jul 21, 2016 at 2:55 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> On Jul 20, 2016, at 6:56 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> On Wed, Jul 20, 2016 at 5:14 AM, Adamson, Andy
>>>>>>>>>>>> <William.Adamson@netapp.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Jul 19, 2016, at 10:51 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi Andy-
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks for taking the time to discuss this with me. I've
>>>>>>>>>>>>>> copied linux-nfs to make this e-mail also an upstream bug
>>>>>>>>>>>>>> report.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> As we saw in the network capture, recovery of GSS contexts
>>>>>>>>>>>>>> after a server reboot fails in certain cases with NFSv4.0
>>>>>>>>>>>>>> and NFSv4.1 mount points.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The reproducer is a simple program that generates one NFS
>>>>>>>>>>>>>> WRITE periodically, run while the server repeatedly reboots
>>>>>>>>>>>>>> (or one cluster head fails over to the other and back). The
>>>>>>>>>>>>>> goal of the reproducer is to identify problems with state
>>>>>>>>>>>>>> recovery without a lot of other I/O going on to clutter up
>>>>>>>>>>>>>> the network capture.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> In the failing case, sec=krb5 is specified on the mount
>>>>>>>>>>>>>> point, and the reproducer is run as root. We've found this
>>>>>>>>>>>>>> combination fails with both NFSv4.0 and NFSv4.1.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> At mount time, the client establishes a GSS context for
>>>>>>>>>>>>>> lease management operations, which is bound to the client's
>>>>>>>>>>>>>> NFS service principal and uses GSS service "integrity."
>>>>>>>>>>>>>> Call this GSS context 1.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> When the reproducer starts, a second GSS context is
>>>>>>>>>>>>>> established for NFS operations associated with that user.
>>>>>>>>>>>>>> Since the reproducer is running as root, this context is
>>>>>>>>>>>>>> also bound to the client's NFS service principal, but it
>>>>>>>>>>>>>> uses the GSS service "none" (reflecting the explicit
>>>>>>>>>>>>>> request for "sec=krb5"). Call this GSS context 2.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> After the server reboots, the client re-establishes a TCP
>>>>>>>>>>>>>> connection with the server, and performs a RENEW
>>>>>>>>>>>>>> operation using context 1. Thanks to the server reboot,
>>>>>>>>>>>>>> contexts 1 and 2 are now stale. The server thus rejects
>>>>>>>>>>>>>> the RPC with RPCSEC_GSS_CTXPROBLEM.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The client performs a GSS_INIT_SEC_CONTEXT via an NFSv4
>>>>>>>>>>>>>> NULL operation. Call this GSS context 3.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Interestingly, the client does not resend the RENEW
>>>>>>>>>>>>>> operation at this point (if it did, we wouldn't see this
>>>>>>>>>>>>>> problem at all).
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The client then attempts to resume the reproducer workload.
>>>>>>>>>>>>>> It sends an NFSv4 WRITE operation, using the first available
>>>>>>>>>>>>>> GSS context in UID 0's credential cache, which is context 3,
>>>>>>>>>>>>>> already bound to the client's NFS service principal. But GSS
>>>>>>>>>>>>>> service "none" is used for this operation, since it is on
>>>>>>>>>>>>>> behalf of the mount where sec=krb5 was specified.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The RPC is accepted, but the server reports
>>>>>>>>>>>>>> NFS4ERR_STALE_STATEID, since it has recently rebooted.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The client responds by attempting state recovery. The
>>>>>>>>>>>>>> first operation it tries is another RENEW. Since this is
>>>>>>>>>>>>>> a lease management operation, the client looks in UID 0's
>>>>>>>>>>>>>> credential cache again and finds the recently established
>>>>>>>>>>>>>> context 3. It tries the RENEW operation using GSS context
>>>>>>>>>>>>>> 3 with GSS service "integrity."
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The server rejects the RENEW RPC with AUTH_FAILED, and
>>>>>>>>>>>>>> the client reports that "check lease failed" and
>>>>>>>>>>>>>> terminates state recovery.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The client re-drives the WRITE operation with the stale
>>>>>>>>>>>>>> stateid with predictable results. The client again tries
>>>>>>>>>>>>>> to recover state by sending a RENEW, and still uses the
>>>>>>>>>>>>>> same GSS context 3 with service "integrity" and gets the
>>>>>>>>>>>>>> same result. A (perhaps slow-motion) STALE_STATEID loop
>>>>>>>>>>>>>> ensues, and the client mount point is deadlocked.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Your analysis was that because the reproducer is run as
>>>>>>>>>>>>>> root, both the reproducer's I/O operations, and lease
>>>>>>>>>>>>>> management operations, attempt to use the same GSS context
>>>>>>>>>>>>>> in UID 0's credential cache, but each uses different GSS
>>>>>>>>>>>>>> services.
>>>>>>>>>>>>>
>>>>>>>>>>>>> As RFC2203 states, "In a creation request, the seq_num and service fields are undefined and both must be ignored by the server”
>>>>>>>>>>>>> So a context creation request while kicked off by an operation with a service attached (e.g. WRITE uses rpc_gss_svc_none and RENEW uses rpc_gss_svc_integrity), can be used by either service level.
>>>>>>>>>>>>> AFAICS a single GSS context could in theory be used for all service levels, but in practice, GSS contexts are restricted to a service level (by client? by server? ) once they are used.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>> The key issue seems to be why, when the mount
>>>>>>>>>>>>>> is first established, the client is correctly able to
>>>>>>>>>>>>>> establish two separate GSS contexts for UID 0; but after
>>>>>>>>>>>>>> a server reboot, the client attempts to use the same GSS
>>>>>>>>>>>>>> context with two different GSS services.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I speculate that it is a race between the WRITE and the RENEW to use the same newly created GSS context that has not been used yet, and so has no assigned service level, and the two requests race to set the service level.
>>>>>>>>>>>>
>>>>>>>>>>>> I agree with Andy. It must be a tight race.
>>>>>>>>>>>
>>>>>>>>>>> In one capture I see something like this after
>>>>>>>>>>> the server restarts:
>>>>>>>>>>>
>>>>>>>>>>> SYN
>>>>>>>>>>> SYN, ACK
>>>>>>>>>>> ACK
>>>>>>>>>>> C WRITE
>>>>>>>>>>> C SEQUENCE
>>>>>>>>>>> R WRITE -> CTX_PROBLEM
>>>>>>>>>>> R SEQUENCE -> CTX_PROBLEM
>>>>>>>>>>> C NULL (KRB5_AP_REQ)
>>>>>>>>>>> R NULL (KRB5_AP_REP)
>>>>>>>>>>> C WRITE
>>>>>>>>>>> C SEQUENCE
>>>>>>>>>>> R WRITE -> NFS4ERR_STALE_STATEID
>>>>>>>>>>> R SEQUENCE -> AUTH_FAILED
>>>>>>>>>>>
>>>>>>>>>>> Andy's theory neatly explains this behavior.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> I have tried to reproduce
>>>>>>>>>>>> your scenario and in my tests of rebooting the server all recover
>>>>>>>>>>>> correctly. In my case, if RENEW was the one hitting the AUTH_ERR then
>>>>>>>>>>>> the new context is established and then RENEW using integrity service
>>>>>>>>>>>> is retried with the new context which gets ERR_STALE_CLIENTID which
>>>>>>>>>>>> then client recovers from. If it's an operation (I have a GETATTR)
>>>>>>>>>>>> that gets AUTH_ERR, then it gets new context and is retried using none
>>>>>>>>>>>> service. Then RENEW gets its own AUTH_ERR as it uses a different
>>>>>>>>>>>> context, a new context is gotten, RENEW is retried over integrity and
>>>>>>>>>>>> gets ERR_STALE_CLIENTID which it recovers from.
>>>>>>>>>>>
>>>>>>>>>>> If one operation is allowed to complete, then
>>>>>>>>>>> the other will always recognize that another
>>>>>>>>>>> fresh GSS context is needed. If two are sent
>>>>>>>>>>> at the same time, they race and one always
>>>>>>>>>>> fails.
>>>>>>>>>>>
>>>>>>>>>>> Helen's test includes a second idle mount point
>>>>>>>>>>> (sec=krb5i) and maybe that is needed to trigger
>>>>>>>>>>> the race?
>>>>>>>>>>
>>>>>>>>>> Chuck, any chance to get "rpcdebug -m rpc auth" output during the
>>>>>>>>>> failure (gssd optionally) (I realize that it might alter the timings
>>>>>>>>>> and not hit the issue but worth a shot)?
>>>>>>>>>
>>>>>>>>> I'm sure that's fine. An internal tester hit this,
>>>>>>>>> not a customer, so I will ask.
>>>>>>>>>
>>>>>>>>> I agree, though, that timing might be a problem:
>>>>>>>>> these systems all have real serial consoles via
>>>>>>>>> iLOM, so /v/l/m traffic does bring everything to
>>>>>>>>> a standstill.
>>>>>>>>>
>>>>>>>>> Meanwhile, what's your opinion about AUTH_FAILED?
>>>>>>>>> Should the server return RPCSEC_GSS_CTXPROBLEM
>>>>>>>>> in this case instead? If it did, do you think
>>>>>>>>> the Linux client would recover by creating a
>>>>>>>>> replacement GSS context?
>>>>>>>>
>>>>>>>> Ah, yes, I equated AUTH_FAILED and AUTH_ERROR in my mind. If the client
>>>>>>>> receives the reason as AUTH_FAILED as opposed to CTXPROBLEM it will
>>>>>>>> fail with an EIO error and will not try to create a new GSS context. So
>>>>>>>> yes, I believe it would help if the server returns any of the
>>>>>>>> following errors:
>>>>>>>>              case RPC_AUTH_REJECTEDCRED:
>>>>>>>>              case RPC_AUTH_REJECTEDVERF:
>>>>>>>>              case RPCSEC_GSS_CREDPROBLEM:
>>>>>>>>              case RPCSEC_GSS_CTXPROBLEM:
>>>>>>>>
>>>>>>>> then the client will recreate the context.
>>>>>>>
>>>>>>> Also in my testing, I can see that credential cache is per gss flavor.
>>>>>>> Just to check, what kernel version is this problem encountered on (I
>>>>>>> know you said upstream) but I just want to double check so that I can
>>>>>>> look at the correct source code.
>>>>>>
>>>>>> v4.1.12 (stable) I think.
>>>>>
>>>>> Also, can you share the network trace?
>>>>
>>>> Hi Chuck,
>>>>
>>>> I was finally able to reproduce the condition you were seeing (i.e.,
>>>> the use of the same context for different gss services).
>>>>
>>>> I enabled rpcdebug rpc auth and I can see that the 2nd request ends up
>>>> finding a gss_upcall message because it's just matched by the uid.
>>>> There is even a comment in auth_gss/auth_gss.c in gss_add_msg() saying
>>>> that if there is an upcall for a uid then it won't add another upcall.
>>>> So I think the decision is made right there to share the same context
>>>> no matter the gss service.
>>>
>>> If I understand correctly, that's just what Andy predicted.
>>>
>>> That check needs to be changed to allow another upcall to be
>>> queued if the UID matches but the GSS service does not.
>>
>> You should be able to use the same context with different services.
>>
>> Apologies, I haven't caught up with the whole discussion above, this one
>> point just jumped out at me.  If you're trying to request a whole new
>> gss context just so you can use, e.g., integrity instead of privacy,
>> then something's wrong.
>
> The client code has separate gss_cred caches (and so separate gss_contexts) per gss_auth, which is per service. AFAIK the client has always obtained a separate context per service per server. While we can use the same gss context with different services, that is not the design choice.

Given the current code, I'd say that it's not clear what the design
choice is. The upcall code states that it will not do another upcall
for a given UID if one is already pending. So it's purely timing luck
that we end up with two different contexts when the mount uses
sec=krb5 and lease operations default to "integrity". And we don't
pass the GSS service between the kernel and gssd, so there is no way
to tie the upcall to the service.
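
As a rough illustration (this is not the actual
net/sunrpc/auth_gss/auth_gss.c code; the structures and names here are
simplified), matching pending upcalls on the UID alone looks roughly
like the sketch below, and that is already enough for a WRITE wanting
service "none" and a RENEW wanting "integrity" to attach to the same
upcall, and therefore the same context:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

enum gss_svc { GSS_SVC_NONE, GSS_SVC_INTEGRITY, GSS_SVC_PRIVACY };

struct gss_upcall_msg {
	unsigned int uid;
	enum gss_svc service;		/* service of the request that queued it */
	struct gss_upcall_msg *next;
};

static struct gss_upcall_msg *pending;

/* Lookup keyed on UID only, like the behavior described above. */
static struct gss_upcall_msg *find_upcall(unsigned int uid)
{
	struct gss_upcall_msg *pos;

	for (pos = pending; pos != NULL; pos = pos->next)
		if (pos->uid == uid)
			return pos;
	return NULL;
}

/* If an upcall for this UID is already queued, reuse it even when the
 * caller wants a different GSS service. */
static struct gss_upcall_msg *add_upcall(unsigned int uid, enum gss_svc svc)
{
	struct gss_upcall_msg *msg = find_upcall(uid);

	if (msg != NULL)
		return msg;
	msg = malloc(sizeof(*msg));
	if (msg == NULL)
		return NULL;
	msg->uid = uid;
	msg->service = svc;
	msg->next = pending;
	pending = msg;
	return msg;
}

int main(void)
{
	struct gss_upcall_msg *w = add_upcall(0, GSS_SVC_NONE);	/* WRITE */
	struct gss_upcall_msg *r = add_upcall(0, GSS_SVC_INTEGRITY);	/* RENEW */

	printf("WRITE and RENEW share one upcall/context: %s\n",
	       (w != NULL && w == r) ? "yes" : "no");
	return 0;
}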

>
> —>Andy
>
>
>>
>> --b.
>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Problem re-establishing GSS contexts after a server reboot
  2016-08-03 19:56                           ` Olga Kornievskaia
@ 2016-08-03 20:06                             ` J. Bruce Fields
  2016-08-03 20:11                               ` Olga Kornievskaia
  2016-08-03 20:18                               ` Adamson, Andy
  0 siblings, 2 replies; 25+ messages in thread
From: J. Bruce Fields @ 2016-08-03 20:06 UTC (permalink / raw)
  To: Olga Kornievskaia; +Cc: Adamson, Andy, Chuck Lever, Linux NFS Mailing List

On Wed, Aug 03, 2016 at 03:56:31PM -0400, Olga Kornievskaia wrote:
> On Wed, Aug 3, 2016 at 2:53 PM, Adamson, Andy
> <William.Adamson@netapp.com> wrote:
> >
> >> On Aug 2, 2016, at 2:06 PM, J. Bruce Fields <bfields@fieldses.org> wrote:
> >>
> >> On Fri, Jul 29, 2016 at 12:38:34PM -0400, Chuck Lever wrote:
> >>>
> >>>> On Jul 29, 2016, at 12:27 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> >>>>
> >>>> On Mon, Jul 25, 2016 at 2:18 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> >>>>> On Thu, Jul 21, 2016 at 5:32 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
> >>>>>>
> >>>>>>
> >>>>>>> On Jul 21, 2016, at 10:46 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> >>>>>>>
> >>>>>>>> On Thu, Jul 21, 2016 at 3:54 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> >>>>>>>>> On Thu, Jul 21, 2016 at 1:56 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
> >>>>>>>>>
> >>>>>>>>>> On Jul 21, 2016, at 6:04 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> >>>>>>>>>>
> >>>>>>>>>> On Thu, Jul 21, 2016 at 2:55 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
> >>>>>>>>>>>
> >>>>>>>>>>>> On Jul 20, 2016, at 6:56 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> >>>>>>>>>>>>
> >>>>>>>>>>>> On Wed, Jul 20, 2016 at 5:14 AM, Adamson, Andy
> >>>>>>>>>>>> <William.Adamson@netapp.com> wrote:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>> On Jul 19, 2016, at 10:51 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Hi Andy-
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Thanks for taking the time to discuss this with me. I've
> >>>>>>>>>>>>>> copied linux-nfs to make this e-mail also an upstream bug
> >>>>>>>>>>>>>> report.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> As we saw in the network capture, recovery of GSS contexts
> >>>>>>>>>>>>>> after a server reboot fails in certain cases with NFSv4.0
> >>>>>>>>>>>>>> and NFSv4.1 mount points.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> The reproducer is a simple program that generates one NFS
> >>>>>>>>>>>>>> WRITE periodically, run while the server repeatedly reboots
> >>>>>>>>>>>>>> (or one cluster head fails over to the other and back). The
> >>>>>>>>>>>>>> goal of the reproducer is to identify problems with state
> >>>>>>>>>>>>>> recovery without a lot of other I/O going on to clutter up
> >>>>>>>>>>>>>> the network capture.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> In the failing case, sec=krb5 is specified on the mount
> >>>>>>>>>>>>>> point, and the reproducer is run as root. We've found this
> >>>>>>>>>>>>>> combination fails with both NFSv4.0 and NFSv4.1.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> At mount time, the client establishes a GSS context for
> >>>>>>>>>>>>>> lease management operations, which is bound to the client's
> >>>>>>>>>>>>>> NFS service principal and uses GSS service "integrity."
> >>>>>>>>>>>>>> Call this GSS context 1.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> When the reproducer starts, a second GSS context is
> >>>>>>>>>>>>>> established for NFS operations associated with that user.
> >>>>>>>>>>>>>> Since the reproducer is running as root, this context is
> >>>>>>>>>>>>>> also bound to the client's NFS service principal, but it
> >>>>>>>>>>>>>> uses the GSS service "none" (reflecting the explicit
> >>>>>>>>>>>>>> request for "sec=krb5"). Call this GSS context 2.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> After the server reboots, the client re-establishes a TCP
> >>>>>>>>>>>>>> connection with the server, and performs a RENEW
> >>>>>>>>>>>>>> operation using context 1. Thanks to the server reboot,
> >>>>>>>>>>>>>> contexts 1 and 2 are now stale. The server thus rejects
> >>>>>>>>>>>>>> the RPC with RPCSEC_GSS_CTXPROBLEM.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> The client performs a GSS_INIT_SEC_CONTEXT via an NFSv4
> >>>>>>>>>>>>>> NULL operation. Call this GSS context 3.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Interestingly, the client does not resend the RENEW
> >>>>>>>>>>>>>> operation at this point (if it did, we wouldn't see this
> >>>>>>>>>>>>>> problem at all).
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> The client then attempts to resume the reproducer workload.
> >>>>>>>>>>>>>> It sends an NFSv4 WRITE operation, using the first available
> >>>>>>>>>>>>>> GSS context in UID 0's credential cache, which is context 3,
> >>>>>>>>>>>>>> already bound to the client's NFS service principal. But GSS
> >>>>>>>>>>>>>> service "none" is used for this operation, since it is on
> >>>>>>>>>>>>>> behalf of the mount where sec=krb5 was specified.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> The RPC is accepted, but the server reports
> >>>>>>>>>>>>>> NFS4ERR_STALE_STATEID, since it has recently rebooted.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> The client responds by attempting state recovery. The
> >>>>>>>>>>>>>> first operation it tries is another RENEW. Since this is
> >>>>>>>>>>>>>> a lease management operation, the client looks in UID 0's
> >>>>>>>>>>>>>> credential cache again and finds the recently established
> >>>>>>>>>>>>>> context 3. It tries the RENEW operation using GSS context
> >>>>>>>>>>>>>> 3 with GSS service "integrity."
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> The server rejects the RENEW RPC with AUTH_FAILED, and
> >>>>>>>>>>>>>> the client reports that "check lease failed" and
> >>>>>>>>>>>>>> terminates state recovery.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> The client re-drives the WRITE operation with the stale
> >>>>>>>>>>>>>> stateid with predictable results. The client again tries
> >>>>>>>>>>>>>> to recover state by sending a RENEW, and still uses the
> >>>>>>>>>>>>>> same GSS context 3 with service "integrity" and gets the
> >>>>>>>>>>>>>> same result. A (perhaps slow-motion) STALE_STATEID loop
> >>>>>>>>>>>>>> ensues, and the client mount point is deadlocked.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Your analysis was that because the reproducer is run as
> >>>>>>>>>>>>>> root, both the reproducer's I/O operations, and lease
> >>>>>>>>>>>>>> management operations, attempt to use the same GSS context
> >>>>>>>>>>>>>> in UID 0's credential cache, but each uses different GSS
> >>>>>>>>>>>>>> services.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> As RFC2203 states, "In a creation request, the seq_num and service fields are undefined and both must be ignored by the server”
> >>>>>>>>>>>>> So a context creation request while kicked off by an operation with a service attached (e.g. WRITE uses rpc_gss_svc_none and RENEW uses rpc_gss_svc_integrity), can be used by either service level.
> >>>>>>>>>>>>> AFAICS a single GSS context could in theory be used for all service levels, but in practice, GSS contexts are restricted to a service level (by client? by server? ) once they are used.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>> The key issue seems to be why, when the mount
> >>>>>>>>>>>>>> is first established, the client is correctly able to
> >>>>>>>>>>>>>> establish two separate GSS contexts for UID 0; but after
> >>>>>>>>>>>>>> a server reboot, the client attempts to use the same GSS
> >>>>>>>>>>>>>> context with two different GSS services.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> I speculate that it is a race between the WRITE and the RENEW to use the same newly created GSS context that has not been used yet, and so has no assigned service level, and the two requests race to set the service level.
> >>>>>>>>>>>>
> >>>>>>>>>>>> I agree with Andy. It must be a tight race.
> >>>>>>>>>>>
> >>>>>>>>>>> In one capture I see something like this after
> >>>>>>>>>>> the server restarts:
> >>>>>>>>>>>
> >>>>>>>>>>> SYN
> >>>>>>>>>>> SYN, ACK
> >>>>>>>>>>> ACK
> >>>>>>>>>>> C WRITE
> >>>>>>>>>>> C SEQUENCE
> >>>>>>>>>>> R WRITE -> CTX_PROBLEM
> >>>>>>>>>>> R SEQUENCE -> CTX_PROBLEM
> >>>>>>>>>>> C NULL (KRB5_AP_REQ)
> >>>>>>>>>>> R NULL (KRB5_AP_REP)
> >>>>>>>>>>> C WRITE
> >>>>>>>>>>> C SEQUENCE
> >>>>>>>>>>> R WRITE -> NFS4ERR_STALE_STATEID
> >>>>>>>>>>> R SEQUENCE -> AUTH_FAILED
> >>>>>>>>>>>
> >>>>>>>>>>> Andy's theory neatly explains this behavior.
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>> I have tried to reproduce
> >>>>>>>>>>>> your scenario and in my tests of rebooting the server all recover
> >>>>>>>>>>>> correctly. In my case, if RENEW was the one hitting the AUTH_ERR then
> >>>>>>>>>>>> the new context is established and then RENEW using integrity service
> >>>>>>>>>>>> is retried with the new context which gets ERR_STALE_CLIENTID which
> >>>>>>>>>>>> then client recovers from. If it's an operation (I have a GETATTR)
> >>>>>>>>>>>> that gets AUTH_ERR, then it gets new context and is retried using none
> >>>>>>>>>>>> service. Then RENEW gets its own AUTH_ERR as it uses a different
> >>>>>>>>>>>> context, a new context is gotten, RENEW is retried over integrity and
> >>>>>>>>>>>> gets ERR_STALE_CLIENTID which it recovers from.
> >>>>>>>>>>>
> >>>>>>>>>>> If one operation is allowed to complete, then
> >>>>>>>>>>> the other will always recognize that another
> >>>>>>>>>>> fresh GSS context is needed. If two are sent
> >>>>>>>>>>> at the same time, they race and one always
> >>>>>>>>>>> fails.
> >>>>>>>>>>>
> >>>>>>>>>>> Helen's test includes a second idle mount point
> >>>>>>>>>>> (sec=krb5i) and maybe that is needed to trigger
> >>>>>>>>>>> the race?
> >>>>>>>>>>
> >>>>>>>>>> Chuck, any chance to get "rpcdebug -m rpc auth" output during the
> >>>>>>>>>> failure (gssd optionally) (I realize that it might alter the timings
> >>>>>>>>>> and not hit the issue but worth a shot)?
> >>>>>>>>>
> >>>>>>>>> I'm sure that's fine. An internal tester hit this,
> >>>>>>>>> not a customer, so I will ask.
> >>>>>>>>>
> >>>>>>>>> I agree, though, that timing might be a problem:
> >>>>>>>>> these systems all have real serial consoles via
> >>>>>>>>> iLOM, so /v/l/m traffic does bring everything to
> >>>>>>>>> a standstill.
> >>>>>>>>>
> >>>>>>>>> Meanwhile, what's your opinion about AUTH_FAILED?
> >>>>>>>>> Should the server return RPCSEC_GSS_CTXPROBLEM
> >>>>>>>>> in this case instead? If it did, do you think
> >>>>>>>>> the Linux client would recover by creating a
> >>>>>>>>> replacement GSS context?
> >>>>>>>>
> >>>>>>>> Ah, yes, I equated AUTH_FAILED and AUTH_ERROR in my mind. If the client
> >>>>>>>> receives the reason as AUTH_FAILED as opposed to CTXPROBLEM it will
> >>>>>>>> fail with an EIO error and will not try to create a new GSS context. So
> >>>>>>>> yes, I believe it would help if the server returns any of the
> >>>>>>>> following errors:
> >>>>>>>>              case RPC_AUTH_REJECTEDCRED:
> >>>>>>>>              case RPC_AUTH_REJECTEDVERF:
> >>>>>>>>              case RPCSEC_GSS_CREDPROBLEM:
> >>>>>>>>              case RPCSEC_GSS_CTXPROBLEM:
> >>>>>>>>
> >>>>>>>> then the client will recreate the context.
> >>>>>>>
> >>>>>>> Also in my testing, I can see that credential cache is per gss flavor.
> >>>>>>> Just to check, what kernel version is this problem encountered on (I
> >>>>>>> know you said upstream) but I just want to double check so that I can
> >>>>>>> look at the correct source code.
> >>>>>>
> >>>>>> v4.1.12 (stable) I think.
> >>>>>
> >>>>> Also, can you share the network trace?
> >>>>
> >>>> Hi Chuck,
> >>>>
> >>>> I was finally able to reproduce the condition you were seeing (i.e.,
> >>>> the use of the same context for different gss services).
> >>>>
> >>>> I enabled rpcdebug rpc auth and I can see that the 2nd request ends up
> >>>> finding a gss_upcall message because it's just matched by the uid.
> >>>> There is even a comment in auth_gss/auth_gss.c in gss_add_msg() saying
> >>>> that if there is an upcall for a uid then it won't add another upcall.
> >>>> So I think the decision is made right there to share the same context
> >>>> no matter the gss service.
> >>>
> >>> If I understand correctly, that's just what Andy predicted.
> >>>
> >>> That check needs to be changed to allow another upcall to be
> >>> queued if the UID matches but the GSS service does not.
> >>
> >> You should be able to use the same context with different services.
> >>
> >> Apologies, I haven't caught up with the whole discussion above, this one
> >> point just jumped out at me.  If you're trying to request a whole new
> >> gss context just so you can use, e.g., integrity instead of privacy,
> >> then something's wrong.
> >
> > The client code has separate gss_cred caches (and so separate gss_contex’s) per gss_auth, which is per service. AFAIK the client has always obtained a separate context per service per server. While we can use the same gss context with different services, that is not the design choice.
> 
> Given the current code, I'd say that it's not clear what the design
> choice is. The upcall code states that it will not do another upcall
> for a given UID if another upcall is already made. So it's purely
> timing luck that we have two different contexts for the mount when
> sec=krb5 and the default for lease operations is "integrity". And we
> don't pass the GSS service between the kernel and gssd and thus no way
> for us to tie the upcall to the service.

Well, I wrote some of the upcall code, and I thought (incorrectly, I
guess) that clients could reuse the same context with multiple services,
so the confusion may just be my fault.  So you think the only fix here
is to key the upcalls on (mechanism, service, uid) instead of just
(mechanism, uid)?
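
Just as a sketch with made-up names (not a patch against auth_gss.c),
I mean something like keying the match on all three:

#include <stdbool.h>
#include <stdio.h>

enum gss_svc { GSS_SVC_NONE, GSS_SVC_INTEGRITY, GSS_SVC_PRIVACY };

struct upcall_key {
	unsigned int mech;	/* e.g. krb5 */
	enum gss_svc service;
	unsigned int uid;
};

/* Reuse a pending upcall only when mechanism, service, and UID all match. */
static bool upcall_key_match(const struct upcall_key *a, const struct upcall_key *b)
{
	return a->mech == b->mech && a->service == b->service && a->uid == b->uid;
}

int main(void)
{
	struct upcall_key write_key = { .mech = 1, .service = GSS_SVC_NONE,      .uid = 0 };
	struct upcall_key renew_key = { .mech = 1, .service = GSS_SVC_INTEGRITY, .uid = 0 };

	printf("WRITE and RENEW share an upcall: %s\n",
	       upcall_key_match(&write_key, &renew_key) ? "yes" : "no");
	return 0;
}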

--b.

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Problem re-establishing GSS contexts after a server reboot
  2016-08-03 20:06                             ` J. Bruce Fields
@ 2016-08-03 20:11                               ` Olga Kornievskaia
  2016-08-03 20:18                               ` Adamson, Andy
  1 sibling, 0 replies; 25+ messages in thread
From: Olga Kornievskaia @ 2016-08-03 20:11 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: Adamson, Andy, Chuck Lever, Linux NFS Mailing List

On Wed, Aug 3, 2016 at 4:06 PM, J. Bruce Fields <bfields@fieldses.org> wrote:
> On Wed, Aug 03, 2016 at 03:56:31PM -0400, Olga Kornievskaia wrote:
>> On Wed, Aug 3, 2016 at 2:53 PM, Adamson, Andy
>> <William.Adamson@netapp.com> wrote:
>> >
>> >> On Aug 2, 2016, at 2:06 PM, J. Bruce Fields <bfields@fieldses.org> wrote:
>> >>
>> >> On Fri, Jul 29, 2016 at 12:38:34PM -0400, Chuck Lever wrote:
>> >>>
>> >>>> On Jul 29, 2016, at 12:27 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>> >>>>
>> >>>> On Mon, Jul 25, 2016 at 2:18 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>> >>>>> On Thu, Jul 21, 2016 at 5:32 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
>> >>>>>>
>> >>>>>>
>> >>>>>>> On Jul 21, 2016, at 10:46 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>> >>>>>>>
>> >>>>>>>> On Thu, Jul 21, 2016 at 3:54 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>> >>>>>>>>> On Thu, Jul 21, 2016 at 1:56 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
>> >>>>>>>>>
>> >>>>>>>>>> On Jul 21, 2016, at 6:04 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>> >>>>>>>>>>
>> >>>>>>>>>> On Thu, Jul 21, 2016 at 2:55 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>> >>>>>>>>>>>
>> >>>>>>>>>>>> On Jul 20, 2016, at 6:56 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>> >>>>>>>>>>>>
>> >>>>>>>>>>>> On Wed, Jul 20, 2016 at 5:14 AM, Adamson, Andy
>> >>>>>>>>>>>> <William.Adamson@netapp.com> wrote:
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>>> On Jul 19, 2016, at 10:51 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> Hi Andy-
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> Thanks for taking the time to discuss this with me. I've
>> >>>>>>>>>>>>>> copied linux-nfs to make this e-mail also an upstream bug
>> >>>>>>>>>>>>>> report.
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> As we saw in the network capture, recovery of GSS contexts
>> >>>>>>>>>>>>>> after a server reboot fails in certain cases with NFSv4.0
>> >>>>>>>>>>>>>> and NFSv4.1 mount points.
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> The reproducer is a simple program that generates one NFS
>> >>>>>>>>>>>>>> WRITE periodically, run while the server repeatedly reboots
>> >>>>>>>>>>>>>> (or one cluster head fails over to the other and back). The
>> >>>>>>>>>>>>>> goal of the reproducer is to identify problems with state
>> >>>>>>>>>>>>>> recovery without a lot of other I/O going on to clutter up
>> >>>>>>>>>>>>>> the network capture.
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> In the failing case, sec=krb5 is specified on the mount
>> >>>>>>>>>>>>>> point, and the reproducer is run as root. We've found this
>> >>>>>>>>>>>>>> combination fails with both NFSv4.0 and NFSv4.1.
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> At mount time, the client establishes a GSS context for
>> >>>>>>>>>>>>>> lease management operations, which is bound to the client's
>> >>>>>>>>>>>>>> NFS service principal and uses GSS service "integrity."
>> >>>>>>>>>>>>>> Call this GSS context 1.
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> When the reproducer starts, a second GSS context is
>> >>>>>>>>>>>>>> established for NFS operations associated with that user.
>> >>>>>>>>>>>>>> Since the reproducer is running as root, this context is
>> >>>>>>>>>>>>>> also bound to the client's NFS service principal, but it
>> >>>>>>>>>>>>>> uses the GSS service "none" (reflecting the explicit
>> >>>>>>>>>>>>>> request for "sec=krb5"). Call this GSS context 2.
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> After the server reboots, the client re-establishes a TCP
>> >>>>>>>>>>>>>> connection with the server, and performs a RENEW
>> >>>>>>>>>>>>>> operation using context 1. Thanks to the server reboot,
>> >>>>>>>>>>>>>> contexts 1 and 2 are now stale. The server thus rejects
>> >>>>>>>>>>>>>> the RPC with RPCSEC_GSS_CTXPROBLEM.
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> The client performs a GSS_INIT_SEC_CONTEXT via an NFSv4
>> >>>>>>>>>>>>>> NULL operation. Call this GSS context 3.
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> Interestingly, the client does not resend the RENEW
>> >>>>>>>>>>>>>> operation at this point (if it did, we wouldn't see this
>> >>>>>>>>>>>>>> problem at all).
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> The client then attempts to resume the reproducer workload.
>> >>>>>>>>>>>>>> It sends an NFSv4 WRITE operation, using the first available
>> >>>>>>>>>>>>>> GSS context in UID 0's credential cache, which is context 3,
>> >>>>>>>>>>>>>> already bound to the client's NFS service principal. But GSS
>> >>>>>>>>>>>>>> service "none" is used for this operation, since it is on
>> >>>>>>>>>>>>>> behalf of the mount where sec=krb5 was specified.
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> The RPC is accepted, but the server reports
>> >>>>>>>>>>>>>> NFS4ERR_STALE_STATEID, since it has recently rebooted.
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> The client responds by attempting state recovery. The
>> >>>>>>>>>>>>>> first operation it tries is another RENEW. Since this is
>> >>>>>>>>>>>>>> a lease management operation, the client looks in UID 0's
>> >>>>>>>>>>>>>> credential cache again and finds the recently established
>> >>>>>>>>>>>>>> context 3. It tries the RENEW operation using GSS context
>> >>>>>>>>>>>>>> 3 with GSS service "integrity."
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> The server rejects the RENEW RPC with AUTH_FAILED, and
>> >>>>>>>>>>>>>> the client reports that "check lease failed" and
>> >>>>>>>>>>>>>> terminates state recovery.
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> The client re-drives the WRITE operation with the stale
>> >>>>>>>>>>>>>> stateid with predictable results. The client again tries
>> >>>>>>>>>>>>>> to recover state by sending a RENEW, and still uses the
>> >>>>>>>>>>>>>> same GSS context 3 with service "integrity" and gets the
>> >>>>>>>>>>>>>> same result. A (perhaps slow-motion) STALE_STATEID loop
>> >>>>>>>>>>>>>> ensues, and the client mount point is deadlocked.
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> Your analysis was that because the reproducer is run as
>> >>>>>>>>>>>>>> root, both the reproducer's I/O operations, and lease
>> >>>>>>>>>>>>>> management operations, attempt to use the same GSS context
>> >>>>>>>>>>>>>> in UID 0's credential cache, but each uses different GSS
>> >>>>>>>>>>>>>> services.
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>> As RFC2203 states, "In a creation request, the seq_num and service fields are undefined and both must be ignored by the server”
>> >>>>>>>>>>>>> So a context creation request while kicked off by an operation with a service attached (e.g. WRITE uses rpc_gss_svc_none and RENEW uses rpc_gss_svc_integrity), can be used by either service level.
>> >>>>>>>>>>>>> AFAICS a single GSS context could in theory be used for all service levels, but in practice, GSS contexts are restricted to a service level (by client? by server? ) once they are used.
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>>> The key issue seems to be why, when the mount
>> >>>>>>>>>>>>>> is first established, the client is correctly able to
>> >>>>>>>>>>>>>> establish two separate GSS contexts for UID 0; but after
>> >>>>>>>>>>>>>> a server reboot, the client attempts to use the same GSS
>> >>>>>>>>>>>>>> context with two different GSS services.
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>> I speculate that it is a race between the WRITE and the RENEW to use the same newly created GSS context that has not been used yet, and so has no assigned service level, and the two requests race to set the service level.
>> >>>>>>>>>>>>
>> >>>>>>>>>>>> I agree with Andy. It must be a tight race.
>> >>>>>>>>>>>
>> >>>>>>>>>>> In one capture I see something like this after
>> >>>>>>>>>>> the server restarts:
>> >>>>>>>>>>>
>> >>>>>>>>>>> SYN
>> >>>>>>>>>>> SYN, ACK
>> >>>>>>>>>>> ACK
>> >>>>>>>>>>> C WRITE
>> >>>>>>>>>>> C SEQUENCE
>> >>>>>>>>>>> R WRITE -> CTX_PROBLEM
>> >>>>>>>>>>> R SEQUENCE -> CTX_PROBLEM
>> >>>>>>>>>>> C NULL (KRB5_AP_REQ)
>> >>>>>>>>>>> R NULL (KRB5_AP_REP)
>> >>>>>>>>>>> C WRITE
>> >>>>>>>>>>> C SEQUENCE
>> >>>>>>>>>>> R WRITE -> NFS4ERR_STALE_STATEID
>> >>>>>>>>>>> R SEQUENCE -> AUTH_FAILED
>> >>>>>>>>>>>
>> >>>>>>>>>>> Andy's theory neatly explains this behavior.
>> >>>>>>>>>>>
>> >>>>>>>>>>>
>> >>>>>>>>>>>> I have tried to reproduce
>> >>>>>>>>>>>> your scenario, and in my tests of rebooting the server everything
>> >>>>>>>>>>>> recovers correctly. In my case, if RENEW was the one hitting the
>> >>>>>>>>>>>> AUTH_ERR, then the new context is established and RENEW, using the
>> >>>>>>>>>>>> integrity service, is retried with the new context; that gets
>> >>>>>>>>>>>> ERR_STALE_CLIENTID, which the client then recovers from. If it's an
>> >>>>>>>>>>>> operation (I have a GETATTR) that gets the AUTH_ERR, then it gets a
>> >>>>>>>>>>>> new context and is retried using the none service. Then RENEW gets
>> >>>>>>>>>>>> its own AUTH_ERR, since it uses a different context; a new context
>> >>>>>>>>>>>> is obtained, RENEW is retried over integrity, and it gets
>> >>>>>>>>>>>> ERR_STALE_CLIENTID, which it recovers from.
>> >>>>>>>>>>>
>> >>>>>>>>>>> If one operation is allowed to complete, then
>> >>>>>>>>>>> the other will always recognize that another
>> >>>>>>>>>>> fresh GSS context is needed. If two are sent
>> >>>>>>>>>>> at the same time, they race and one always
>> >>>>>>>>>>> fails.
>> >>>>>>>>>>>
>> >>>>>>>>>>> Helen's test includes a second idle mount point
>> >>>>>>>>>>> (sec=krb5i) and maybe that is needed to trigger
>> >>>>>>>>>>> the race?
>> >>>>>>>>>>
>> >>>>>>>>>> Chuck, any chance to get "rpcdebug -m rpc auth" output during the
>> >>>>>>>>>> failure (gssd optionally)? I realize that it might alter the timings
>> >>>>>>>>>> and not hit the issue, but it's worth a shot.
>> >>>>>>>>>
>> >>>>>>>>> I'm sure that's fine. An internal tester hit this,
>> >>>>>>>>> not a customer, so I will ask.
>> >>>>>>>>>
>> >>>>>>>>> I agree, though, that timing might be a problem:
>> >>>>>>>>> these systems all have real serial consoles via
>> >>>>>>>>> iLOM, so /v/l/m traffic does bring everything to
>> >>>>>>>>> a standstill.
>> >>>>>>>>>
>> >>>>>>>>> Meanwhile, what's your opinion about AUTH_FAILED?
>> >>>>>>>>> Should the server return RPCSEC_GSS_CTXPROBLEM
>> >>>>>>>>> in this case instead? If it did, do you think
>> >>>>>>>>> the Linux client would recover by creating a
>> >>>>>>>>> replacement GSS context?
>> >>>>>>>>
>> >>>>>>>> Ah, yes, I equated AUTH_FAILED and AUTH_ERROR in my mind. If the client
>> >>>>>>>> receives the reason as AUTH_FAILED as opposed to CTXPROBLEM, it will
>> >>>>>>>> fail with an EIO error and will not try to create a new GSS context. So
>> >>>>>>>> yes, I believe it would help if the server returned any of the
>> >>>>>>>> following errors:
>> >>>>>>>>              case RPC_AUTH_REJECTEDCRED:
>> >>>>>>>>              case RPC_AUTH_REJECTEDVERF:
>> >>>>>>>>              case RPCSEC_GSS_CREDPROBLEM:
>> >>>>>>>>              case RPCSEC_GSS_CTXPROBLEM:
>> >>>>>>>>
>> >>>>>>>> then the client will recreate the context.
>> >>>>>>>
>> >>>>>>> Also, in my testing I can see that the credential cache is per GSS flavor.
>> >>>>>>> Just to check, what kernel version is this problem encountered on? (I
>> >>>>>>> know you said upstream, but I just want to double-check so that I can
>> >>>>>>> look at the correct source code.)
>> >>>>>>
>> >>>>>> v4.1.12 (stable) I think.
>> >>>>>
>> >>>>> Also, can you share the network trace?
>> >>>>
>> >>>> Hi Chuck,
>> >>>>
>> >>>> I was finally able to reproduce the condition you were seeing (i.e.,
>> >>>> the use of the same context for different gss services).
>> >>>>
>> >>>> I enabled rpcdebug rpc auth and I can see that the 2nd request ends up
>> >>>> finding an existing gss_upcall message because it is matched only by the UID.
>> >>>> There is even a comment in auth_gss/auth_gss.c in gss_add_msg() saying
>> >>>> that if there is already an upcall for a UID, it won't add another upcall.
>> >>>> So I think the decision is made right there to share the same context
>> >>>> no matter the GSS service.
>> >>>
>> >>> If I understand correctly, that's just what Andy predicted.
>> >>>
>> >>> That check needs to be changed to allow another upcall to be
>> >>> queued if the UID matches but the GSS service does not.
>> >>
>> >> You should be able to use the same context with different services.
>> >>
>> >> Apologies, I haven't caught up with the whole discussion above, this one
>> >> point just jumped out at me.  If you're trying to request a whole new
>> >> gss context just so you can use, e.g., integrity instead of privacy,
>> >> then something's wrong.
>> >
>> > The client code has separate gss_cred caches (and so separate gss_contexts) per gss_auth, which is per service. AFAIK the client has always obtained a separate context per service per server. While we can use the same GSS context with different services, that is not the design choice.
>>
>> Given the current code, I'd say that it's not clear what the design
>> choice is. The upcall code states that it will not do another upcall
>> for a given UID if one is already in flight. So it's purely timing
>> luck that we get two different contexts for a sec=krb5 mount whose
>> lease operations default to "integrity". And we don't pass the GSS
>> service between the kernel and gssd, so there is no way for us to tie
>> the upcall to the service.
>
> Well, I wrote some of the upcall code, and I thought (incorrectly, I
> guess) that clients could reuse the same context with multiple services,
> so the confusion may just be my fault.  So you think the only fix here
> is to key the upcalls on (mechanism, service, uid) instead of just
> (mechanism, uid)?

Right now I'm going to change the check on the "calling gssd" path: if the
existing upcall has the same UID but not the same GSS service, still
place another upcall. But the lookup on the gssd side is just based on
the UID, because that's all we have. Maybe that'll work?

>
> --b.
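
[To make the idea above concrete, here is a minimal, standalone C sketch of
keying the upcall-sharing decision on the GSS service as well as the UID.
All names here (pending_upcall, can_share_pending_upcall, GSS_SVC_*) are
hypothetical illustrations of the check being proposed, not the kernel's
auth_gss.c symbols.]

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical service levels, mirroring rpc_gss_svc_none/_integrity/_privacy. */
    enum gss_service { GSS_SVC_NONE = 1, GSS_SVC_INTEGRITY, GSS_SVC_PRIVACY };

    /* Hypothetical stand-in for an upcall to gssd that is still pending. */
    struct pending_upcall {
        uint32_t uid;             /* user the new context is being requested for */
        enum gss_service service; /* service level of the request that queued it */
    };

    /*
     * Sketch of the proposed reuse test: a second request may piggy-back on a
     * pending upcall only if both the UID and the GSS service match.  With the
     * behavior being discussed, only the UID is compared, so a sec=krb5 WRITE
     * (service "none") and a lease-management RENEW (service "integrity") for
     * UID 0 end up sharing the single newly created context.
     */
    static bool can_share_pending_upcall(const struct pending_upcall *p,
                                         uint32_t uid, enum gss_service service)
    {
        return p->uid == uid && p->service == service;
    }

    int main(void)
    {
        struct pending_upcall p = { .uid = 0, .service = GSS_SVC_NONE };

        /* With this rule, the RENEW (integrity) no longer reuses the WRITE's
         * (none) upcall and triggers its own context establishment. */
        printf("RENEW may share WRITE's upcall: %s\n",
               can_share_pending_upcall(&p, 0, GSS_SVC_INTEGRITY) ? "yes" : "no");
        return 0;
    }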

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Problem re-establishing GSS contexts after a server reboot
  2016-08-03 20:06                             ` J. Bruce Fields
  2016-08-03 20:11                               ` Olga Kornievskaia
@ 2016-08-03 20:18                               ` Adamson, Andy
  2016-08-03 20:33                                 ` Trond Myklebust
  1 sibling, 1 reply; 25+ messages in thread
From: Adamson, Andy @ 2016-08-03 20:18 UTC (permalink / raw)
  To: J. Bruce Fields
  Cc: Olga Kornievskaia, Adamson, Andy, Chuck Lever, Linux NFS Mailing List

> On Aug 3, 2016, at 4:06 PM, J. Bruce Fields <bfields@fieldses.org> wrote:
> 
> [...]
> 
> Well, I wrote some of the upcall code, and I thought (incorrectly, I
> guess) that clients could reuse the same context with multiple services,
> so the confusion may just be my fault.  So you think the only fix here
> is to key the upcalls

No need to change the upcall, just don't let the 2nd request use the new
downcall GSS context unless the service assigned to it matches. I would
think you could set the service immediately upon receiving the new GSS
context downcall and then check it prior to the 2nd request using it.

—>Andy

> on (mechanism, service, uid) instead of just
> (mechanism, uid)?
> 
> --b.
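
[Andy's suggestion above can be sketched the same way: record the service
level when gssd's downcall installs the fresh context, and let a second
request use that context only when the service matches. Every name here
(gss_ctx_stub, gss_ctx_install, gss_ctx_usable) is hypothetical; this is a
compile-and-run illustration of the check, not kernel code.]

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    enum gss_service { GSS_SVC_UNSET = 0, GSS_SVC_NONE, GSS_SVC_INTEGRITY, GSS_SVC_PRIVACY };

    /* Hypothetical stand-in for the per-credential GSS context. */
    struct gss_ctx_stub {
        enum gss_service service; /* locked in when the downcall delivers the context */
    };

    /* Downcall path: bind the freshly delivered context to the service level
     * of the request whose upcall created it. */
    static void gss_ctx_install(struct gss_ctx_stub *ctx, enum gss_service svc)
    {
        ctx->service = svc;
    }

    /* Before an RPC reuses an existing context: allow it only when the
     * requested service matches; otherwise the caller must obtain its own
     * context rather than racing to repurpose this one. */
    static bool gss_ctx_usable(const struct gss_ctx_stub *ctx, enum gss_service svc)
    {
        return ctx != NULL && ctx->service != GSS_SVC_UNSET && ctx->service == svc;
    }

    int main(void)
    {
        struct gss_ctx_stub ctx = { .service = GSS_SVC_UNSET };

        gss_ctx_install(&ctx, GSS_SVC_NONE); /* WRITE's upcall completes */
        printf("WRITE (none) may use ctx: %s\n",
               gss_ctx_usable(&ctx, GSS_SVC_NONE) ? "yes" : "no");
        printf("RENEW (integrity) may use ctx: %s\n",
               gss_ctx_usable(&ctx, GSS_SVC_INTEGRITY) ? "yes" : "no");
        return 0;
    }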

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Problem re-establishing GSS contexts after a server reboot
  2016-08-03 20:18                               ` Adamson, Andy
@ 2016-08-03 20:33                                 ` Trond Myklebust
  2016-08-03 21:12                                   ` Adamson, Andy
  0 siblings, 1 reply; 25+ messages in thread
From: Trond Myklebust @ 2016-08-03 20:33 UTC (permalink / raw)
  To: Adamson William Andros
  Cc: Fields Bruce James, Kornievskaia Olga, Lever Chuck,
	List Linux NFS Mailing

> On Aug 3, 2016, at 16:18, Adamson, Andy <William.Adamson@netapp.com> wrote:
> 
> [...]
> 
> No need to change the upcall, just don't let the 2nd request use the new
> downcall GSS context unless the service assigned to it matches. I would
> think you could set the service immediately upon receiving the new GSS
> context downcall and then check it prior to the 2nd request using it.

https://tools.ietf.org/html/rfc2203#section-5.2.2: "In a creation request,
the seq_num and service fields are undefined and both must be ignored by
the server."

IOW: the server can't lock the service until the data exchange phase
(https://tools.ietf.org/html/rfc2203#section-5.3).

Cheers
  Trond
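
[Trond's citation is easier to follow next to the credential layout RFC 2203
defines: the service field travels in every request's credential, but during
the RPCSEC_GSS_INIT / CONTINUE_INIT control procedures it is undefined and
must be ignored, so the server has nothing to pin a service to until the
data-exchange phase. Below is a C-flavored paraphrase of that structure; the
RFC specifies it in XDR, and this rendering is illustrative only, not a
header from any real codebase.]

    #include <stdint.h>

    /* Control procedures (RFC 2203, section 5). */
    enum rpc_gss_proc {
        RPCSEC_GSS_DATA          = 0,
        RPCSEC_GSS_INIT          = 1,
        RPCSEC_GSS_CONTINUE_INIT = 2,
        RPCSEC_GSS_DESTROY       = 3,
    };

    /* Security services (RFC 2203, section 5). */
    enum rpc_gss_service {
        rpc_gss_svc_none      = 1,
        rpc_gss_svc_integrity = 2,
        rpc_gss_svc_privacy   = 3,
    };

    /* C paraphrase of rpc_gss_cred_vers_1_t: one of these travels in the
     * credential of every RPCSEC_GSS request.  In a creation request
     * (RPCSEC_GSS_INIT / CONTINUE_INIT) seq_num and service are undefined
     * and the server must ignore them; they only become meaningful once
     * gss_proc is RPCSEC_GSS_DATA. */
    struct rpc_gss_cred_v1 {
        enum rpc_gss_proc    gss_proc;   /* control procedure         */
        uint32_t             seq_num;    /* sequence number           */
        enum rpc_gss_service service;    /* service used, per request */
        uint32_t             handle_len;
        const unsigned char *handle;     /* opaque context handle     */
    };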


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Problem re-establishing GSS contexts after a server reboot
  2016-08-03 20:33                                 ` Trond Myklebust
@ 2016-08-03 21:12                                   ` Adamson, Andy
  0 siblings, 0 replies; 25+ messages in thread
From: Adamson, Andy @ 2016-08-03 21:12 UTC (permalink / raw)
  To: Trond Myklebust
  Cc: Adamson, Andy, Fields Bruce James, Kornievskaia Olga,
	Lever Chuck, List Linux NFS Mailing

> On Aug 3, 2016, at 4:33 PM, Trond Myklebust <trondmy@primarydata.com> wrote:
> 
> [...]
SUxFRD8NCj4+Pj4+Pj4+Pj4+Pj4gU2hvdWxkIHRoZSBzZXJ2ZXIgcmV0dXJuIFJQQ1NFQ19HU1Nf
Q1RYUFJPQkxFTQ0KPj4+Pj4+Pj4+Pj4+PiBpbiB0aGlzIGNhc2UgaW5zdGVhZD8gSWYgaXQgZGlk
LCBkbyB5b3UgdGhpbmsNCj4+Pj4+Pj4+Pj4+Pj4gdGhlIExpbnV4IGNsaWVudCB3b3VsZCByZWNv
dmVyIGJ5IGNyZWF0aW5nIGENCj4+Pj4+Pj4+Pj4+Pj4gcmVwbGFjZW1lbnQgR1NTIGNvbnRleHQ/
DQo+Pj4+Pj4+Pj4+Pj4gDQo+Pj4+Pj4+Pj4+Pj4gQWgsIHllcywgSSBlcXVhdGVkIEFVVEhfRkFJ
TEVEIEFuZCBBVVRIX0VSUk9SIGluIG15IG1pbmQuIElmIGNsaWVudA0KPj4+Pj4+Pj4+Pj4+IHJl
Y2VpdmVzIHRoZSByZWFzb24gYXMgQVVUSF9GQUlMRUQgYXMgb3Bwb3NlIHRvIENUWFBST0JMRU0g
aXQgd2lsbA0KPj4+Pj4+Pj4+Pj4+IGZhaWwgd2l0aCBFSU8gZXJyb3IgYW5kIHdpbGwgbm90IHRy
eSB0byBjcmVhdGUgYSBuZXcgR1NTIGNvbnRleHQuIFNvDQo+Pj4+Pj4+Pj4+Pj4geWVzLCBJIGJl
bGlldmUgaXQgd291bGQgaGVscCBpZiB0aGUgc2VydmVyIHJldHVybnMgYW55IG9mIHRoZQ0KPj4+
Pj4+Pj4+Pj4+IGZvbGxvd2luZyBlcnJvcnM6DQo+Pj4+Pj4+Pj4+Pj4gICAgICAgICAgIGNhc2Ug
UlBDX0FVVEhfUkVKRUNURURDUkVEOg0KPj4+Pj4+Pj4+Pj4+ICAgICAgICAgICBjYXNlIFJQQ19B
VVRIX1JFSkVDVEVEVkVSRjoNCj4+Pj4+Pj4+Pj4+PiAgICAgICAgICAgY2FzZSBSUENTRUNfR1NT
X0NSRURQUk9CTEVNOg0KPj4+Pj4+Pj4+Pj4+ICAgICAgICAgICBjYXNlIFJQQ1NFQ19HU1NfQ1RY
UFJPQkxFTToNCj4+Pj4+Pj4+Pj4+PiANCj4+Pj4+Pj4+Pj4+PiB0aGVuIHRoZSBjbGllbnQgd2ls
bCByZWNyZWF0ZSB0aGUgY29udGV4dC4NCj4+Pj4+Pj4+Pj4+IA0KPj4+Pj4+Pj4+Pj4gQWxzbyBp
biBteSB0ZXN0aW5nLCBJIGNhbiBzZWUgdGhhdCBjcmVkZW50aWFsIGNhY2hlIGlzIHBlciBnc3Mg
Zmxhdm9yLg0KPj4+Pj4+Pj4+Pj4gSnVzdCB0byBjaGVjaywgd2hhdCBrZXJuZWwgdmVyc2lvbiBp
cyB0aGlzIHByb2JsZW0gZW5jb3VudGVyZWQgb24gKEkNCj4+Pj4+Pj4+Pj4+IGtub3cgeW91IHNh
aWQgdXBzdHJlYW0pIGJ1dCBJIGp1c3Qgd2FudCB0byBkb3VibGUgY2hlY2sgc28gdGhhdCBJIGNh
bg0KPj4+Pj4+Pj4+Pj4gbG9vayBhdCB0aGUgY29ycmVjdCBzb3VyY2UgY29kZS4NCj4+Pj4+Pj4+
Pj4gDQo+Pj4+Pj4+Pj4+IHY0LjEuMTIgKHN0YWJsZSkgSSB0aGluay4NCj4+Pj4+Pj4+PiANCj4+
Pj4+Pj4+PiBBbHNvLCBjYW4geW91IHNoYXJlIHRoZSBuZXR3b3JrIHRyYWNlPw0KPj4+Pj4+Pj4g
DQo+Pj4+Pj4+PiBIaSBDaHVjaywNCj4+Pj4+Pj4+IA0KPj4+Pj4+Pj4gSSB3YXMgZmluYWxseSBh
YmxlIHRvIHJlcHJvZHVjZSB0aGUgY29uZGl0aW9uIHlvdSB3ZXJlIHNlZWluZyAoaS5lLiwNCj4+
Pj4+Pj4+IHRoZSB1c2Ugb2YgdGhlIHNhbWUgY29udGV4dCBmb3IgZGlmZmVyZW50IGdzcyBzZXJ2
aWNlcykuDQo+Pj4+Pj4+PiANCj4+Pj4+Pj4+IEkgZW5hYmxlZCBycGNkZWJ1ZyBycGMgYXV0aCBh
bmQgSSBjYW4gc2VlIHRoYXQgdGhlIDJuZCByZXF1ZXN0IGVuZHMgdXANCj4+Pj4+Pj4+IGZpbmRp
bmcgYSBnc3NfdXBjYWxsIG1lc3NhZ2UgYmVjYXVzZSBpdCdzIGp1c3QgbWF0Y2hlZCBieSB0aGUg
dWlkLg0KPj4+Pj4+Pj4gVGhlcmUgaXMgZXZlbiBhIGNvbW1lbnQgaW4gYXV0aF9nc3MvYXV0aF9n
c3MuYyBpbiBnc3NfYWRkX21zZygpIHNheWluZw0KPj4+Pj4+Pj4gdGhhdCBpZiB0aGVyZSBpcyB1
cGNhbGwgZm9yIGFuIHVpZCB0aGVuIGl0IHdvbid0IGFkZCBhbm90aGVyIHVwY2FsbC4NCj4+Pj4+
Pj4+IFNvIEkgdGhpbmsgdGhlIGRlY2lzaW9uIGlzIG1hZGUgcmlnaHQgdGhlcmUgdG8gc2hhcmUg
dGhlIHNhbWUgY29udGV4dA0KPj4+Pj4+Pj4gbm8gbWF0dGVyIHRoZSBnc3Mgc2VydmljZS4NCj4+
Pj4+Pj4gDQo+Pj4+Pj4+IElmIEkgdW5kZXJzdGFuZCBjb3JyZWN0bHksIHRoYXQncyBqdXN0IHdo
YXQgQW5keSBwcmVkaWN0ZWQuDQo+Pj4+Pj4+IA0KPj4+Pj4+PiBUaGF0IGNoZWNrIG5lZWRzIHRv
IGJlIGNoYW5nZWQgdG8gYWxsb3cgYW5vdGhlciB1cGNhbGwgdG8gYmUNCj4+Pj4+Pj4gcXVldWVk
IGlmIHRoZSBVSUQgbWF0Y2hlcyBidXQgdGhlIEdTUyBzZXJ2aWNlIGRvZXMgbm90Lg0KPj4+Pj4+
IA0KPj4+Pj4+IFlvdSBzaG91bGQgYmUgYWJsZSB0byB1c2UgdGhlIHNhbWUgY29udGV4dCB3aXRo
IGRpZmZlcmVudCBzZXJ2aWNlcy4NCj4+Pj4+PiANCj4+Pj4+PiBBcG9sb2dpZXMsIEkgaGF2ZW4n
dCBjYXVnaHQgdXAgd2l0aCB0aGUgd2hvbGUgZGlzY3Vzc2lvbiBhYm92ZSwgdGhpcyBvbmUNCj4+
Pj4+PiBwb2ludCBqdXN0IGp1bXBlZCBvdXQgYXQgbWUuICBJZiB5b3UncmUgdHJ5aW5nIHRvIHJl
cXVlc3QgYSB3aG9sZSBuZXcNCj4+Pj4+PiBnc3MgY29udGV4dCBqdXN0IHNvIHlvdSBjYW4gdXNl
LCBlLmcuLCBpbnRlZ3JpdHkgaW5zdGVhZCBvZiBwcml2YWN5LA0KPj4+Pj4+IHRoZW4gc29tZXRo
aW5nJ3Mgd3JvbmcuDQo+Pj4+PiANCj4+Pj4+IFRoZSBjbGllbnQgY29kZSBoYXMgc2VwYXJhdGUg
Z3NzX2NyZWQgY2FjaGVzIChhbmQgc28gc2VwYXJhdGUgZ3NzX2NvbnRleOKAmXMpIHBlciBnc3Nf
YXV0aCwgd2hpY2ggaXMgcGVyIHNlcnZpY2UuIEFGQUlLIHRoZSBjbGllbnQgaGFzIGFsd2F5cyBv
YnRhaW5lZCBhIHNlcGFyYXRlIGNvbnRleHQgcGVyIHNlcnZpY2UgcGVyIHNlcnZlci4gV2hpbGUg
d2UgY2FuIHVzZSB0aGUgc2FtZSBnc3MgY29udGV4dCB3aXRoIGRpZmZlcmVudCBzZXJ2aWNlcywg
dGhhdCBpcyBub3QgdGhlIGRlc2lnbiBjaG9pY2UuDQo+Pj4+IA0KPj4+PiBHaXZlbiB0aGUgY3Vy
cmVudCBjb2RlLCBJJ2Qgc2F5IHRoYXQgaXQncyBub3QgY2xlYXIgd2hhdCB0aGUgZGVzaWduDQo+
Pj4+IGNob2ljZSBpcy4gVGhlIHVwY2FsbCBjb2RlIHN0YXRlcyB0aGF0IGl0IHdpbGwgbm90IGRv
IGFub3RoZXIgdXBjYWxsDQo+Pj4+IGZvciBhIGdpdmVuIFVJRCBpZiBhbm90aGVyIHVwY2FsbCBp
cyBhbHJlYWR5IG1hZGUuIFNvIGl0J3MgcHVyZWx5DQo+Pj4+IHRpbWluZyBsdWNrIHRoYXQgd2Ug
aGF2ZSB0d28gZGlmZmVyZW50IGNvbnRleHRzIGZvciB0aGUgbW91bnQgd2hlbg0KPj4+PiBzZWM9
a3JiNSBhbmQgdGhlIGRlZmF1bHQgZm9yIGxlYXNlIG9wZXJhdGlvbnMgaXMgImludGVncml0eSIu
IEFuZCB3ZQ0KPj4+PiBkb24ndCBwYXNzIHRoZSBHU1Mgc2VydmljZSBiZXR3ZWVuIHRoZSBrZXJu
ZWwgYW5kIGdzc2QgYW5kIHRodXMgbm8gd2F5DQo+Pj4+IGZvciB1cyB0byB0aWUgdGhlIHVwY2Fs
bCB0byB0aGUgc2VydmljZS4NCj4+PiANCj4+PiBXZWxsLCBJIHdyb3RlIHNvbWUgb2YgdGhlIHVw
Y2FsbCBjb2RlLCBhbmQgSSB0aG91Z2ggKGluY29ycmVjdGx5LCBJDQo+Pj4gZ3Vlc3MpIHRoYXQg
Y2xpZW50cyBjb3VsZCByZXVzZSB0aGUgc2FtZSBjb250ZXh0IHdpdGggbXVsdGlwbGUgc2Vydmlj
ZXMsDQo+Pj4gc28gdGhlIGNvbmZ1c2lvbiBtYXkganVzdCBiZSBteSBmYXVsdC4gIFNvIHlvdSB0
aGluayB0aGUgb25seSBmaXggaGVyZQ0KPj4+IGlzIHRvIGtleSB0aGUgdXBjYWxscw0KPj4gDQo+
PiBObyBuZWVkIHRvIGNoYW5nZSB0aGUgdXBjYWxsLCBqdXN0IGRvbuKAmSB0IGxldCB0aGUgMm5k
IHJlcXVlc3QgdXNlIHRoZSBuZXcgZG93bmNhbGwgZ3NzIGNvbnRleHQgdW5sZXNzIHRoZSBzZXJ2
aWNlIGFzc2lnbmVkIHRvIGl0IG1hdGNoZXMuIEkgd291bGQgdGhpbmsgeW91IGNvdWxkIHNldCB0
aGUgc2VydmljZSBpbW1lZGlhdGVseSB1cG9uIHJlY2VpdmluZyB0aGUgbmV3IGdzcyBjb250ZXh0
IGRvd24gY2FsbCBhbmQgdGhlbiBjaGVjayBpdCBwcmlvciB0byB0aGUgMm5kIHJlcXVlc3QgdXNl
cyBpdC4NCj4+IA0KPiANCj4gaHR0cHM6Ly90b29scy5pZXRmLm9yZy9odG1sL3JmYzIyMDMjc2Vj
dGlvbi01LjIuMiBJbiBhIGNyZWF0aW9uIHJlcXVlc3QsIHRoZSBzZXFfbnVtIGFuZCBzZXJ2aWNl
IGZpZWxkcyBhcmUgdW5kZWZpbmVkIGFuZCBib3RoIG11c3QgYmUgaWdub3JlZCBieSB0aGUgc2Vy
dmVyLg0KPiANCj4gSU9XOiB0aGUgc2VydmVyIGNhbuKAmXQgbG9jayB0aGUgc2VydmljZSB1bnRp
bCB0aGUgZGF0YSBleGNoYW5nZSBwaGFzZSAoaHR0cHM6Ly90b29scy5pZXRmLm9yZy9odG1sL3Jm
YzIyMDMjc2VjdGlvbi01LjMpLg0KDQpZZXMsIHRoYXQgaXMgdHJ1ZS4gSSBkb27igJl0IHNlZSBo
b3cgdGhhdCBjaGFuZ2VzIHdoYXQgSSBwcm9wb3NlZC4gT25jZSB0aGUgY2xpZW50IGNob29zZXMg
dG8gdXNlIGEgZ3NzIGNvbnRleHQgZm9yIGEgc2VydmljZSBsZXZlbCwgaXQgc2hvdWxkIG5vdCB1
c2UgdGhlIHNhbWUgY29udGV4dCBmb3IgYW5vdGhlciBzZXJ2aWNlIGxldmVsIGFzIGl0IGlzIGFs
bW9zdCBndWFyYW50ZWVkIHRvIGZhaWwuDQoNCuKAlD5BbmR5DQoNCj4gDQo+IENoZWVycw0KPiAg
VHJvbmQNCg0K
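
The RPCSEC_GSS credential that accompanies every request is what carries the
service field discussed above. A rough C rendering of its definition from
RFC 2203, section 5, is shown below for illustration; the XDR in the RFC is
authoritative, and the variable-length context handle is shown here as a
simple length/pointer pair.

    /* Illustrative C rendering of the RPCSEC_GSS credential body from
     * RFC 2203, section 5. The RFC's XDR definition is authoritative. */
    enum rpc_gss_proc_t {
            RPCSEC_GSS_DATA          = 0,   /* normal data request */
            RPCSEC_GSS_INIT          = 1,   /* context creation */
            RPCSEC_GSS_CONTINUE_INIT = 2,
            RPCSEC_GSS_DESTROY       = 3
    };

    enum rpc_gss_service_t {
            rpc_gss_svc_none      = 1,
            rpc_gss_svc_integrity = 2,
            rpc_gss_svc_privacy   = 3
    };

    struct rpc_gss_cred_vers_1_t {
            enum rpc_gss_proc_t    gss_proc; /* control procedure */
            unsigned int           seq_num;  /* sequence number */
            enum rpc_gss_service_t service;  /* undefined and ignored for
                                                INIT; meaningful for DATA */
            struct {
                    unsigned int   len;
                    unsigned char *data;
            } handle;                        /* server-issued context handle */
    };

As Trond points out, this is why the server cannot bind a context to a
particular service at creation time; only the client can decide how it will
use the context it gets back.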
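
A minimal sketch of the rule Andy proposes, using an invented gss_ctx_model
type and gss_ctx_try_use() helper rather than the real net/sunrpc/auth_gss
data structures: the first request to use a freshly downcalled context locks
it to a service, and a request wanting a different service is refused and
has to establish its own context.

    #include <stdbool.h>
    #include <stdio.h>

    /* Invented model of a context delivered by a gssd downcall. */
    enum gss_service { SVC_UNSET, SVC_NONE, SVC_INTEGRITY, SVC_PRIVACY };

    struct gss_ctx_model {
            unsigned int     uid;      /* credential cache it belongs to */
            enum gss_service service;  /* locked in by the first user */
    };

    /*
     * Return true if ctx may be used for the requested service. The first
     * caller assigns the service; a later caller asking for a different
     * service is refused and must trigger its own context establishment.
     * (In the kernel this check would have to run under the proper lock.)
     */
    static bool gss_ctx_try_use(struct gss_ctx_model *ctx, enum gss_service want)
    {
            if (ctx->service == SVC_UNSET) {
                    ctx->service = want;
                    return true;
            }
            return ctx->service == want;
    }

    int main(void)
    {
            struct gss_ctx_model ctx = { .uid = 0, .service = SVC_UNSET };

            /* Root's WRITE (sec=krb5, service "none") wins the race ... */
            printf("WRITE, svc none:      %s\n",
                   gss_ctx_try_use(&ctx, SVC_NONE) ? "uses ctx" : "needs new ctx");

            /* ... so the RENEW (service "integrity") may not piggy-back on it. */
            printf("RENEW, svc integrity: %s\n",
                   gss_ctx_try_use(&ctx, SVC_INTEGRITY) ? "uses ctx" : "needs new ctx");
            return 0;
    }

Whichever of the WRITE or the RENEW loses the race then simply goes through
context establishment again instead of sending a request the server is
almost certain to reject.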
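
Chuck's alternative, keying the pending-upcall match on the GSS service as
well as the UID, might look roughly like the sketch below. The struct and
function here are hypothetical stand-ins (the real gss_upcall_msg differs),
and as Olga notes the GSS service is not currently part of the kernel/gssd
upcall, so tying upcalls to a service would also touch that interface.

    #include <stdbool.h>
    #include <sys/types.h>

    /* Simplified, hypothetical stand-in for a queued gssd upcall. */
    struct pending_upcall {
            uid_t        uid;
            unsigned int service;   /* rpc_gss_svc_none/integrity/privacy */
    };

    /*
     * Today the lookup matches on UID alone, so a RENEW wanting "integrity"
     * attaches to the upcall a WRITE wanting "none" has already queued.
     * Matching on the (uid, service) pair would give each service its own
     * upcall, and therefore its own context.
     */
    bool upcall_matches(const struct pending_upcall *msg,
                        uid_t uid, unsigned int service)
    {
            return msg->uid == uid && msg->service == service;
    }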



Thread overview: 25+ messages
2016-07-19 14:51 Problem re-establishing GSS contexts after a server reboot Chuck Lever
2016-07-20  9:14 ` Adamson, Andy
2016-07-20 16:56   ` Olga Kornievskaia
2016-07-21  6:55     ` Chuck Lever
2016-07-21 16:04       ` Olga Kornievskaia
2016-07-21 17:56         ` Chuck Lever
2016-07-21 19:54           ` Olga Kornievskaia
2016-07-21 20:46             ` Olga Kornievskaia
2016-07-21 21:32               ` Chuck Lever
2016-07-25 18:18                 ` Olga Kornievskaia
2016-07-29 16:27                   ` Olga Kornievskaia
2016-07-29 16:38                     ` Chuck Lever
2016-07-29 17:07                       ` Adamson, Andy
2016-07-29 17:32                         ` Adamson, Andy
2016-07-29 22:24                           ` Olga Kornievskaia
2016-08-02 18:06                       ` J. Bruce Fields
2016-08-03 18:53                         ` Adamson, Andy
2016-08-03 19:56                           ` Olga Kornievskaia
2016-08-03 20:06                             ` J. Bruce Fields
2016-08-03 20:11                               ` Olga Kornievskaia
2016-08-03 20:18                               ` Adamson, Andy
2016-08-03 20:33                                 ` Trond Myklebust
2016-08-03 21:12                                   ` Adamson, Andy
2016-08-03 19:14                         ` Chuck Lever
2016-08-03 19:34                           ` J. Bruce Fields
