* Kerberized NFS-Server Problem still present in 3.10.0-rc2
@ 2013-06-19 13:47 Sven Geggus
  2013-06-19 21:34 ` J. Bruce Fields
  0 siblings, 1 reply; 19+ messages in thread
From: Sven Geggus @ 2013-06-19 13:47 UTC (permalink / raw)
  To: linux-nfs

Hello,

I just tried using Kernel 3.10.0-rc2 on my NFS-server test VM and I'm still
unable to run a working kerberized NFS server with this kernel.

To make it work I would have to go back to Kernel 3.8.x (which I can't
because I want to use drbd 8.4).

Any hints on how to further debug this?

As I already said, I failed in "git bisecting" the problem because the
behaviour changed from one broken behaviour (infinite hang) to another
(permission denied) while doing so.

Sven


-- 
"We don't know the OS that God uses, but the Vatican uses Linux"
                               (Sister Judith Zoebelein, Vatican Webmaster)

/me is giggls@ircnet, http://sven.gegg.us/ on the Web

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Kerberized NFS-Server Problem still present in 3.10.0-rc2
  2013-06-19 13:47 Kerberized NFS-Server Problem still present in 3.10.0-rc2 Sven Geggus
@ 2013-06-19 21:34 ` J. Bruce Fields
  2013-06-20  8:03   ` Sven Geggus
  0 siblings, 1 reply; 19+ messages in thread
From: J. Bruce Fields @ 2013-06-19 21:34 UTC (permalink / raw)
  To: Sven Geggus; +Cc: linux-nfs

On Wed, Jun 19, 2013 at 01:47:20PM +0000, Sven Geggus wrote:
> Hello,
> 
> I just tried using Kernel 3.10.0-rc2 on my NFS-server test VM and I'm still
> unable to run a working kerberized NFS server with this kernel.
> 
> To make it work I would have to go back to Kernel 3.8.x (which I can't
> because I want to use drbd 8.4).
> 
> Any hints on how to further debug this?
> 
> As I already said, I failed in "git bisecting" the problem because the
> behaviour changed from one broken behaviour (infinite hang) to another
> (permission denied) while doing so.

Apologies, I don't remember the previous discussion, so could you
summarize for me?:

	- you're able to reproduce/not reproduce the problem by changing
	  *only* the kernel on the nfs server between 3.8.x and
	  3.10.0-rc2 ?
	- have you figured out exactly where the failure happens?:
	- which version of NFS are you using?

Also if you haven't already it would be useful to know at exactly which
step of the process it's failing.  As a first step running wireshark on
the traffic between client and server and looking for a NULL
init_sec_context rpc call and seeing whether it succeeds or not, would
be useful.

--b.


* Re: Kerberized NFS-Server Problem still present in 3.10.0-rc2
  2013-06-19 21:34 ` J. Bruce Fields
@ 2013-06-20  8:03   ` Sven Geggus
  2013-06-20 12:09     ` Sven Geggus
  2013-06-20 14:49     ` J. Bruce Fields
  0 siblings, 2 replies; 19+ messages in thread
From: Sven Geggus @ 2013-06-20  8:03 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: linux-nfs

J. Bruce Fields wrote on Wednesday, 19 June, at 23:34:

> Apologies, I don't remember the previous discussion, so could you
> summarize for me?

Sure!
My original bug report is here:
http://www.spinics.net/lists/linux-nfs/msg37454.html

> - you're able to reproduce/not reproduce the problem by changing
>   *only* the kernel on the nfs server between 3.8.x and
>   3.10.0-rc2 ?

Exactly. See also the postings of Richard van den Toorn on the list.

Summary: the mount hangs after upgrading from 3.8.x to 3.9.x.

Unfortunately I was unable to finish the git bisect because somewhere along
the way the behaviour changed from hanging to "permission denied".

If you give me a hint whether this behaviour should be marked as good or bad,
I can continue bisecting!

> - have you figured out exactly where the failure happens?:

No, because I have not been able to run the git bisect to completion.

> - which version of NFS are you using?

NFSv4 with Kerberos authentication.

> Also if you haven't already it would be useful to know at exactly which
> step of the process it's failing.  As a first step running wireshark on
> the traffic between client and server and looking for a NULL
> init_sec_context rpc call and seeing whether it succeeds or not, would
> be useful.

I already posted a wireshark dump:
http://www.spinics.net/lists/linux-nfs/msg37472.html

Regards

Sven

-- 
"I am not afraid of the return of the fascists in the mask of the fascists,
but of the return of the fascists in the mask of the democrats"
(Theodor W. Adorno)
/me is giggls@ircnet, http://sven.gegg.us/ on the Web


* Re: Kerberized NFS-Server Problem still present in 3.10.0-rc2
  2013-06-20  8:03   ` Sven Geggus
@ 2013-06-20 12:09     ` Sven Geggus
  2013-06-20 14:49     ` J. Bruce Fields
  1 sibling, 0 replies; 19+ messages in thread
From: Sven Geggus @ 2013-06-20 12:09 UTC (permalink / raw)
  To: linux-nfs

Sven Geggus <lists@fuchsschwanzdomain.de> wrote:

> Unfortunately I was unable to finish the git bisect because somewhere along
> the way the behaviour changed from hanging to "permission denied".

I have now decided that broken behaviour has to be treated as broken,
regardless of the failure mode, and so my git bisect ended here:

683428fae8c73d7d7da0fa2e0b6beb4d8df4e808 is the first bad commit
commit 683428fae8c73d7d7da0fa2e0b6beb4d8df4e808
Author: Eric W. Biederman <ebiederm@xmission.com>
Date:   Sat Feb 2 01:40:53 2013 -0800

    sunrpc: Update svcgss xdr handle to rpsec_contect cache
    
    For each received uid call make_kuid and validate the result.
    For each received gid call make_kgid and validate the result.
    
    Cc: "J. Bruce Fields" <bfields@fieldses.org>
    Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
    Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>

:040000 040000 15a764b57c7d75a102296c1d151a244e6ec074c9 64a3a6f19b5a2a4aaf15002b476664d19e68e205 M	net

Note that this is the commit which caused permission-denied errors rather than
infinite hangs:
mount.nfs4: access denied by server while mounting vnfsrv:/storage

I suspect that it has been "fixed" in a way which is also broken.
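
For reference, the decision rule above -- any failure mode counts as bad --
can be sketched against a toy repository (the repository and the test command
are stand-ins for the kernel tree and a build-and-mount cycle; commit numbers
are invented for the demonstration):

```python
import os, subprocess, tempfile

def git(*args, cwd):
    """Run a git command in the given repo and return its stdout."""
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = tempfile.mkdtemp()
git("init", "-q", cwd=repo)
git("config", "user.email", "test@example.invalid", cwd=repo)
git("config", "user.name", "test", cwd=repo)

# Ten commits; the "regression" lands at commit 6.  From there on the test
# fails -- whether it would hang or return EACCES makes no difference, both
# simply map to a nonzero exit status.
for i in range(1, 11):
    with open(os.path.join(repo, "state"), "w") as f:
        f.write(str(i))
    git("add", "state", cwd=repo)
    git("commit", "-q", "-m", f"commit {i}", cwd=repo)

git("bisect", "start", "HEAD", "HEAD~9", cwd=repo)      # bad, then good
# exit 0 = good, 1..127 (except 125) = bad, per git-bisect-run conventions
out = git("bisect", "run", "sh", "-c", "test $(cat state) -lt 6", cwd=repo)
bad = [l for l in out.splitlines() if "is the first bad commit" in l][0]
subject = git("log", "-1", "--format=%s", bad.split()[0], cwd=repo).strip()
print(subject)  # -> commit 6
```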

Regards

Sven

-- 
"C Is Quirky, Flawed, And An Enormous Success."
(Dennis M. Ritchie)

/me is giggls@ircnet, http://sven.gegg.us/ on the Web


* Re: Kerberized NFS-Server Problem still present in 3.10.0-rc2
  2013-06-20  8:03   ` Sven Geggus
  2013-06-20 12:09     ` Sven Geggus
@ 2013-06-20 14:49     ` J. Bruce Fields
  2013-06-20 14:52       ` J. Bruce Fields
  1 sibling, 1 reply; 19+ messages in thread
From: J. Bruce Fields @ 2013-06-20 14:49 UTC (permalink / raw)
  To: Sven Geggus; +Cc: linux-nfs

On Thu, Jun 20, 2013 at 10:03:55AM +0200, Sven Geggus wrote:
> J. Bruce Fields wrote on Wednesday, 19 June, at 23:34:
> 
> > Apologies, I don't remember the previous discussion, so could you
> > summarize for me?
> 
> Sure!
> My original bug report is here:
> http://www.spinics.net/lists/linux-nfs/msg37454.html
> 
> > - you're able to reproduce/not reproduce the problem by changing
> >   *only* the kernel on the nfs server between 3.8.x and
> >   3.10.0-rc2 ?
> 
> Exactly. See also the postings of Richard van den Toorn on the list.
> 
> Summary: the mount hangs after upgrading from 3.8.x to 3.9.x.
> 
> Unfortunately I was unable to finish the git bisect because somewhere along
> the way the behaviour changed from hanging to "permission denied".
> 
> If you give me a hint whether this behaviour should be marked as good or bad,
> I can continue bisecting!
> 
> > - have you figured out exactly where the failure happens?:
> 
> No, because I have not been able to run the git bisect to completion.
> 
> > - which version of NFS are you using?
> 
> NFSv4 with Kerberos authentication.

What happens with NFSv3?

> > Also if you haven't already it would be useful to know at exactly which
> > step of the process it's failing.  As a first step running wireshark on
> > the traffic between client and server and looking for a NULL
> > init_sec_context rpc call and seeing whether it succeeds or not, would
> > be useful.
> 
> I already posted a wireshark dump:
> http://www.spinics.net/lists/linux-nfs/msg37472.html

So it looks like it did a null init_sec_context establishment and then the
server didn't reply to the first rpc call using the new context.  But
it's hard to be sure without more details--could I get the binary dump?

That might be explained by the problem fixed by
3c34ae11fac3b30629581d0bfaf80f58e82cfbfb "nfsd: fix krb5 handling of
anonymous principals", but that was already in v3.9.

--b.


* Re: Kerberized NFS-Server Problem still present in 3.10.0-rc2
  2013-06-20 14:49     ` J. Bruce Fields
@ 2013-06-20 14:52       ` J. Bruce Fields
  2013-06-21  8:32         ` Sven Geggus
  0 siblings, 1 reply; 19+ messages in thread
From: J. Bruce Fields @ 2013-06-20 14:52 UTC (permalink / raw)
  To: Sven Geggus; +Cc: linux-nfs

On Thu, Jun 20, 2013 at 10:49:55AM -0400, J. Bruce Fields wrote:
> On Thu, Jun 20, 2013 at 10:03:55AM +0200, Sven Geggus wrote:
> > J. Bruce Fields wrote on Wednesday, 19 June, at 23:34:
> > 
> > > Apologies, I don't remember the previous discussion, so could you
> > > summarize for me?
> > 
> > Sure!
> > My original bug report is here:
> > http://www.spinics.net/lists/linux-nfs/msg37454.html
> > 
> > > - you're able to reproduce/not reproduce the problem by changing
> > >   *only* the kernel on the nfs server between 3.8.x and
> > >   3.10.0-rc2 ?
> > 
> > Exactly. See also the postings of Richard van den Toorn on the list.
> > 
> > Summary: the mount hangs after upgrading from 3.8.x to 3.9.x.
> > 
> > Unfortunately I was unable to finish the git bisect because somewhere along
> > the way the behaviour changed from hanging to "permission denied".
> > 
> > If you give me a hint whether this behaviour should be marked as good or bad,
> > I can continue bisecting!
> > 
> > > - have you figured out exactly where the failure happens?:
> > 
> > No, because I have not been able to run the git bisect to completion.
> > 
> > > - which version of NFS are you using?
> > 
> > NFSv4 with Kerberos authentication.
> 
> What happens with NFSv3?
> 
> > > Also if you haven't already it would be useful to know at exactly which
> > > step of the process it's failing.  As a first step running wireshark on
> > > the traffic between client and server and looking for a NULL
> > > init_sec_context rpc call and seeing whether it succeeds or not, would
> > > be useful.
> > 
> > I already posted a wireshark dump:
> > http://www.spinics.net/lists/linux-nfs/msg37472.html
> 
> So it looks like it did a null init_sec_context establishment and then the
> server didn't reply to the first rpc call using the new context.  But
> it's hard to be sure without more details--could I get the binary dump?
> 
> That might be explained by the problem fixed by
> 3c34ae11fac3b30629581d0bfaf80f58e82cfbfb "nfsd: fix krb5 handling of
> anonymous principals", but that was already in v3.9.

Hm, or I wonder if 8b5309d41751b8a086d8e7a43abe37d9ff24559d "svcrpc: fix
failures to handle -1 uid's and gid's" (appended) would help.

--b.

commit 8b5309d41751b8a086d8e7a43abe37d9ff24559d
Author: J. Bruce Fields <bfields@redhat.com>
Date:   Fri May 24 17:24:34 2013 -0400

    svcrpc: fix failures to handle -1 uid's and gid's
    
    As of f025adf191924e3a75ce80e130afcd2485b53bb8 "sunrpc: Properly decode
    kuids and kgids in RPC_AUTH_UNIX credentials" any rpc containing a -1
    (0xffff) uid or gid would fail with a badcred error.
    
    Reported symptoms were xmbc clients failing on upgrade of the NFS
    server; examination of the network trace showed them sending -1 as the
    gid.
    
    Reported-by: Julian Sikorski <belegdol@gmail.com>
    Cc: "Eric W. Biederman" <ebiederm@xmission.com>
    Signed-off-by: J. Bruce Fields <bfields@redhat.com>

diff --git a/net/sunrpc/svcauth_unix.c b/net/sunrpc/svcauth_unix.c
index c3f9e1e..06bdf5a 100644
--- a/net/sunrpc/svcauth_unix.c
+++ b/net/sunrpc/svcauth_unix.c
@@ -810,11 +810,15 @@ svcauth_unix_accept(struct svc_rqst *rqstp, __be32 *authp)
 		goto badcred;
 	argv->iov_base = (void*)((__be32*)argv->iov_base + slen);	/* skip machname */
 	argv->iov_len -= slen*4;
-
+	/*
+	 * Note: we skip uid_valid()/gid_valid() checks here for
+	 * backwards compatibility with clients that use -1 id's.
+	 * Instead, -1 uid or gid is later mapped to the
+	 * (export-specific) anonymous id by nfsd_setuser.
+	 * Supplementary gid's will be left alone.
+	 */
 	cred->cr_uid = make_kuid(&init_user_ns, svc_getnl(argv)); /* uid */
 	cred->cr_gid = make_kgid(&init_user_ns, svc_getnl(argv)); /* gid */
-	if (!uid_valid(cred->cr_uid) || !gid_valid(cred->cr_gid))
-		goto badcred;
 	slen = svc_getnl(argv);			/* gids length */
 	if (slen > 16 || (len -= (slen + 2)*4) < 0)
 		goto badcred;
@@ -823,8 +827,6 @@ svcauth_unix_accept(struct svc_rqst *rqstp, __be32 *authp)
 		return SVC_CLOSE;
 	for (i = 0; i < slen; i++) {
 		kgid_t kgid = make_kgid(&init_user_ns, svc_getnl(argv));
-		if (!gid_valid(kgid))
-			goto badcred;
 		GROUP_AT(cred->cr_group_info, i) = kgid;
 	}
 	if (svc_getu32(argv) != htonl(RPC_AUTH_NULL) || svc_getu32(argv) != 0) {

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: Kerberized NFS-Server Problem still present in 3.10.0-rc2
  2013-06-20 14:52       ` J. Bruce Fields
@ 2013-06-21  8:32         ` Sven Geggus
  2013-06-24 21:54           ` J. Bruce Fields
  0 siblings, 1 reply; 19+ messages in thread
From: Sven Geggus @ 2013-06-21  8:32 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: linux-nfs

[-- Attachment #1: Type: text/plain, Size: 961 bytes --]

J. Bruce Fields wrote on Thursday, 20 June, at 16:52:

> What happens with NFSv3?

I have no idea. I have an NFSv4-only server setup and would need to enable v3
explicitly.

> So it looks like it did a null init_sec_context establishment and then the
> server didn't reply to the first rpc call using the new context.  But
> it's hard to be sure without more details--could I get the binary dump?

Attached (Server kernel was 3.10.0-rc2 with patch below).

> Hm, or I wonder if 8b5309d41751b8a086d8e7a43abe37d9ff24559d "svcrpc: fix
> failures to handle -1 uid's and gid's" (appended) would help.

No! I just patched this into a vanilla 3.10.0-rc2. Same problem. The NFS mount
just hangs.


Sven

-- 
"Thinking of using NT for your critical apps?
                                  Isn't there enough suffering in the world?"
                   (Advertisement of Sun Microsystems in Wall Street Journal)
/me is giggls@ircnet, http://sven.gegg.us/ on the Web

[-- Attachment #2: 3.10.0-rc2-nfs-server.pcap --]
[-- Type: application/cap, Size: 3648 bytes --]


* Re: Kerberized NFS-Server Problem still present in 3.10.0-rc2
  2013-06-21  8:32         ` Sven Geggus
@ 2013-06-24 21:54           ` J. Bruce Fields
  2013-06-25  9:46             ` Sven Geggus
  0 siblings, 1 reply; 19+ messages in thread
From: J. Bruce Fields @ 2013-06-24 21:54 UTC (permalink / raw)
  To: Sven Geggus; +Cc: linux-nfs

On Fri, Jun 21, 2013 at 10:32:22AM +0200, Sven Geggus wrote:
> J. Bruce Fields wrote on Thursday, 20 June, at 16:52:
> 
> > What happens with NFSv3?
> 
> I have no idea. I have an NFSv4-only server setup and would need to enable v3
> explicitly.
> 
> So it looks like it did a null init_sec_context establishment and then the
> > server didn't reply to the first rpc call using the new context.  But
> > it's hard to be sure without more details--could I get the binary dump?
> 
> Attached (Server kernel was 3.10.0-rc2 with patch below).

OK, yes, thanks, the init_sec_context exchange does look acceptable but
the server never responds to the first real rpc (a SETCLIENTID) using
the new context.

Could you run

	strace -p $(pidof rpc.svcgssd) -s4096 -e trace=open,close,read,write

on the server while trying this?  That should capture the contents of
the relevant upcalls.  I wonder if there's something odd about the
context svcgssd is passing down.

> > Hm, or I wonder if 8b5309d41751b8a086d8e7a43abe37d9ff24559d "svcrpc: fix
> > failures to handle -1 uid's and gid's" (appended) would help.
> 
> No! I just patched this into a vanilla 3.10.0-rc2. Same problem. The NFS mount
> just hangs.

OK, thanks for checking.

--b.


* Re: Kerberized NFS-Server Problem still present in 3.10.0-rc2
  2013-06-24 21:54           ` J. Bruce Fields
@ 2013-06-25  9:46             ` Sven Geggus
  2013-07-01 12:09               ` Sven Geggus
       [not found]               ` <20130701205234.GF19945@fieldses.org>
  0 siblings, 2 replies; 19+ messages in thread
From: Sven Geggus @ 2013-06-25  9:46 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: nfs

[-- Attachment #1: Type: text/plain, Size: 638 bytes --]

J. Bruce Fields wrote on Monday, 24 June, at 23:54:

> Could you run
> 
> 	strace -p $(pidof rpc.svcgssd) -s4096 -e trace=open,close,read,write
> 
> on the server while trying this?

Hm, how confidential is this information? This is only a test VM, but should I
regenerate the machine's krb5.keytab after posting this here?

Anyway, file attached.

Regards

Sven

-- 
Microsoft is apparently the only company capable of introducing an office
software package that is not compatible with Office.
                            (Florian Weimer in de.alt.sysadmin.recovery)
/me is giggls@ircnet, http://sven.gegg.us/ on the Web

[-- Attachment #2: svcgssd-trace.txt --]
[-- Type: text/plain, Size: 10278 bytes --]

read(4, "\\x \\x6082051f06092a864886f71201020201006e82050e3082050aa003020105a10302010ea20703050020000000a382040761820403308203ffa003020105a1101b0e50432e494954422e4648472e4445a2273025a003020103a11e301c1b036e66731b15766e667372762e70632e696974622e6668672e6465a38203bb308203b7a003020117a103020102a28203a9048203a569e454c74d63a5e336da118e0d8632d542b66043e3affb1bd63db929ef215409ee7741ba6e47d4caf433076d42c32b7d96cc50c8db418052658d9833354ebd124c660bf375a8fe4fe956315808b5449c53fc15a9e5f7c6dfa1ec03c775a11ceba73e3557186fcac8276ad1fffc1bea70405461dbe104df6df64635439384c124c64eac7514e3dc73fb4413c0c8d57fcda1a799a7b35479692bf59578b689dd831c1295e7544f8928f0f5e774c387663da07d8281a2d5ef524a324ba11c8f749079b85b3c8b23fcf745032c984bc9a5b3bb20c42ed60671e48bb217e5e374fed3d7c38a1e819452779db11f65a66e6c117f4bb727f202081a90699bbf720bc40dbd6fd7db62b6fc1dc86970b6a0e8852159003f2d575d175265a4451dfa622ae2098d6e1443dc274f6870b1402c02b07fa7608837f3a55ac61c3cd40ea8e24256da49f122b271a3101e41eb916ddd7267c1ed0318fa9adc44604851d328d3f8c3662baab909dce2d74dac949324985b50ccb289a4d44de80209203e165dff9b71c9d8c71a2c8badae16f61f7ef389567284d44b163620320eb825de51f4b78425022f19cf703941d316499c7972e96326f046aa951f5181b02e7ac24cb3292cfbee09e8976d4876f68ccf78f6fc802ef920452afb98f05cd0d15c4104c5ae9107ae19e9dbe37e33279b4469f894a2772aff47a92562d44cce438b7cf624d2dd12f95f52db81f136028fc72143359940f694e22b680b756860010bff9b51a7c8ff0a448c54f5490cafa469630ee757fd1158255c0649c7c0270e5783ca5b47038d510f2c80cf4f00ea9157b7f96c3c3d25e082b7cf0a3ed9f5b4c21f8376c4b14fb61ff160a88cc8ec30f128c2b33a2ebd3d1d4052556553aef35d2010b15c105ebfbd7384102052923906bf87ba7dcdd1125ca07641939835d6174a95f9b63b12ee8fa23d7d34c7e1bfaeb278a9387aacd4e01bbe86ec7bbe6f9e255f56a9c441330cf947b916e8b1e807ccdbfe300283b02356eb5138d8153ab18366e9bd69c0573c1e5ddbde934e4e33892bc800817b545db2b54192003892f8b000277fa30e09ee6c90ac7eae605521ec724968439188b04d7fa22da24c36860d283765e3ac9fae8ad0a76e65c253a74d8dc8192f6ffeaf804ae0b08a54886fb289156c062a461d59e1b701824cedb456949226
69d659113177c368142fbf9faaf0eddc06e928d7aded58b840e2ef3250433d4e82c0681ecff0916457badb460b43bb27f832e0785a0f0becc25789197276b8b80037cb54db8a3899b56468695bc92e7dea61f9c3d64e31a481e93081e6a003020117a281de0481db9b2606a032a287d3989d99a68691f0b7bc4df43b6fbae0a5a42c3cd069d6cb1437ac745360501c0224e9fab79cb3ba62b2de6af5fa81279701e978f79e0127aab0b8d4f7528c6b733897167abb9be39ed13c245d3bc6dd7d8edf1be5844a3978a36e1f06cbc5c91f45e5cdc892a92a83ceca5ad86b50905a135c7f7c295977f837f5824670e0ed94d6cfdfe8df17b9b1821fa3728d3d56df9282b18cb9da1f755749bccc6cae0650911a52a1466385983467d0c5d814cca39f85adece4a438379f1458d837b1be0764f8b240b16bea9dcf84c0ef697ebff179d94e\n", 4096) = 2636
open("/proc/fs/nfsd/supported_krb5_enctypes", O_RDONLY) = 5
read(5, "18,17,16,23,3,1,2", 4096)      = 17
read(5, "", 4096)                       = 0
close(5)                                = 0
open("/dev/urandom", O_RDONLY)          = 5
read(5, "2\270?\307;\347\n1 \21\372\267\2460\351\262\270\tB\343", 20) = 20
close(5)                                = 0
open("/dev/urandom", O_RDONLY)          = 5
read(5, "\343\370\235>\335d\307\255\377\246\307\242|\213\264\317\336>/\211", 20) = 20
close(5)                                = 0
open("/dev/urandom", O_RDONLY)          = 5
read(5, "\311m\315K.d\207hA\321\323H\201\356\246~zQ\346\177", 20) = 20
close(5)                                = 0
open("/dev/urandom", O_RDONLY)          = 5
read(5, "\251rhT\356\2263\10\352S\271B\36\301\373\244k\234\221x", 20) = 20
close(5)                                = 0
open("/etc/krb5.keytab", O_RDONLY)      = 5
read(5, "\5\2\0\0\0D\0\2\0\16PC.IITB.FHG.DE\0\4host\0\25vnfsrv.pc.iitb.fhg.de\0\0\0\1Q\260nq\2\0\1\0\10\346\2177^\v\302\255\250\0\0\0D\0\2\0\16PC.IITB.FHG.DE\0\4host\0\25vnfsrv.pc.iitb.fhg.de\0\0\0\1Q\260nq\2\0\3\0\10\346\2177^\v\302\255\250\0\0\0L\0\2\0\16PC.IITB.FHG.DE\0\4host\0\25vnfsrv.pc.iitb.fhg.de\0\0\0\1Q\260nq\2\0\27\0\20\200\205\364\330\203\321\331]\36\16\0105'\242\377C\0\0\0005\0\2\0\16PC.IITB.FHG.DE\0\4host\0\6VNFSRV\0\0\0\1Q\260nq\2\0\1\0\10\346\2177^\v\302\255\250\0\0\0005\0\2\0\16PC.IITB.FHG.DE\0\4host\0\6VNFSRV\0\0\0\1Q\260nq\2\0\3\0\10\346\2177^\v\302\255\250\0\0\0=\0\2\0\16PC.IITB.FHG.DE\0\4host\0\6VNFSRV\0\0\0\1Q\260nq\2\0\27\0\20\200\205\364\330\203\321\331]\36\16\0105'\242\377C\0\0\0000\0\1\0\16PC.IITB.FHG.DE\0\7VNFSRV$\0\0\0\1Q\260nq\2\0\1\0\10\346\2177^\v\302\255\250\0\0\0000\0\1\0\16PC.IITB.FHG.DE\0\7VNFSRV$\0\0\0\1Q\260nq\2\0\3\0\10\346\2177^\v\302\255\250\0\0\0008\0\1\0\16PC.IITB.FHG.DE\0\7VNFSRV$\0\0\0\1Q\260nq\2\0\27\0\20\200\205\364\330\203\321\331]\36\16\0105'\242\377C\0\0\0C\0\2\0\16PC.IITB.FHG.DE\0\3nfs\0\25vnfsrv.pc.iitb.fhg.de\0\0\0\1Q\260n\231\2\0\1\0\10\346\2177^\v\302\255\250\0\0\0C\0\2\0\16PC.IITB.FHG.DE\0\3nfs\0\25vnfsrv.pc.iitb.fhg.de\0\0\0\1Q\260n\231\2\0\3\0\10\346\2177^\v\302\255\250\0\0\0K\0\2\0\16PC.IITB.FHG.DE\0\3nfs\0\25vnfsrv.pc.iitb.fhg.de\0\0\0\1Q\260n\231\2\0\27\0\20\200\205\364\330\203\321\331]\36\16\0105'\242\377C\0\0\0004\0\2\0\16PC.IITB.FHG.DE\0\3nfs\0\6VNFSRV\0\0\0\1Q\260n\231\2\0\1\0\10\346\2177^\v\302\255\250\0\0\0004\0\2\0\16PC.IITB.FHG.DE\0\3nfs\0\6VNFSRV\0\0\0\1Q\260n\232\2\0\3\0\10\346\2177^\v\302\255\250\0\0\0<\0\2\0\16PC.IITB.FHG.DE\0\3nfs\0\6VNFSRV\0\0\0\1Q\260n\232\2\0\27\0\20\200\205\364\330\203\321\331]\36\16\0105'\242\377C", 8192) = 966
close(5)                                = 0
write(3, "\1\0\0\0\0k\0\0\0HASH:AD338CB0C424A414DBAC3710F3651714 21:VENUS$@PC.IITB.FHG.DE 40:nfs/vnfsrv.pc.iitb.fhg.de@PC.IITB.FHG.DE\0E\200\7\0\200e\311Q\26\0\0\0VENUS$@PC.IITB.FHG.DE\0)\0\0\0nfs/vnfsrv.pc.iitb.fhg.de@PC.IITB.FHG.DE\0E\200\7\0\200e\311Q", 203) = 203
open("/usr/lib/krb5/plugins/authdata", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
open("/usr/lib/krb5/plugins/authdata", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
open("/dev/urandom", O_RDONLY)          = 5
read(5, "\362\223F\31\237\250\277iP\200\256\270\251\204>\345\276\376r\230", 20) = 20
close(5)                                = 0
open("/dev/urandom", O_RDONLY)          = 5
read(5, "\225tEV\331\306\220\305\251!o\316.X\273\v*\312B\345", 20) = 20
close(5)                                = 0
open("/dev/urandom", O_RDONLY)          = 5
read(5, "HB.i\335\252\266m\22Vo)\210j\261\177\253\346\303m", 20) = 20
close(5)                                = 0
read(5, "\2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\377\377\377\377\377\377\377\377\0\0\0\0\0\0\0\0\0\0\0\0", 36) = 36
close(5)                                = 0
open("/dev/urandom", O_RDONLY)          = 5
read(5, "\366\3\247\370\217\21.\310\335\200\357\350&Z\1\350\304\r2c", 20) = 20
close(5)                                = 0
open("/proc/net/rpc/auth.rpcsec.context/channel", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 5
write(5, "\\x04000000 1372181158 -1 -1 0 krb5 \\x06000000a6d2c9517da62c3700000000120000008b9e7cb53082bca36502670079249e8e2c44aab7f8713ff9b09225c10f6a2057 \n", 143) = 143
close(5)                                = 0
open("/proc/net/rpc/auth.rpcsec.init/channel", O_WRONLY) = 5
write(5, "\\x \\x6082051f06092a864886f71201020201006e82050e3082050aa003020105a10302010ea20703050020000000a382040761820403308203ffa003020105a1101b0e50432e494954422e4648472e4445a2273025a003020103a11e301c1b036e66731b15766e667372762e70632e696974622e6668672e6465a38203bb308203b7a003020117a103020102a28203a9048203a569e454c74d63a5e336da118e0d8632d542b66043e3affb1bd63db929ef215409ee7741ba6e47d4caf433076d42c32b7d96cc50c8db418052658d9833354ebd124c660bf375a8fe4fe956315808b5449c53fc15a9e5f7c6dfa1ec03c775a11ceba73e3557186fcac8276ad1fffc1bea70405461dbe104df6df64635439384c124c64eac7514e3dc73fb4413c0c8d57fcda1a799a7b35479692bf59578b689dd831c1295e7544f8928f0f5e774c387663da07d8281a2d5ef524a324ba11c8f749079b85b3c8b23fcf745032c984bc9a5b3bb20c42ed60671e48bb217e5e374fed3d7c38a1e819452779db11f65a66e6c117f4bb727f202081a90699bbf720bc40dbd6fd7db62b6fc1dc86970b6a0e8852159003f2d575d175265a4451dfa622ae2098d6e1443dc274f6870b1402c02b07fa7608837f3a55ac61c3cd40ea8e24256da49f122b271a3101e41eb916ddd7267c1ed0318fa9adc44604851d328d3f8c3662baab909dce2d74dac949324985b50ccb289a4d44de80209203e165dff9b71c9d8c71a2c8badae16f61f7ef389567284d44b163620320eb825de51f4b78425022f19cf703941d316499c7972e96326f046aa951f5181b02e7ac24cb3292cfbee09e8976d4876f68ccf78f6fc802ef920452afb98f05cd0d15c4104c5ae9107ae19e9dbe37e33279b4469f894a2772aff47a92562d44cce438b7cf624d2dd12f95f52db81f136028fc72143359940f694e22b680b756860010bff9b51a7c8ff0a448c54f5490cafa469630ee757fd1158255c0649c7c0270e5783ca5b47038d510f2c80cf4f00ea9157b7f96c3c3d25e082b7cf0a3ed9f5b4c21f8376c4b14fb61ff160a88cc8ec30f128c2b33a2ebd3d1d4052556553aef35d2010b15c105ebfbd7384102052923906bf87ba7dcdd1125ca07641939835d6174a95f9b63b12ee8fa23d7d34c7e1bfaeb278a9387aacd4e01bbe86ec7bbe6f9e255f56a9c441330cf947b916e8b1e807ccdbfe300283b02356eb5138d8153ab18366e9bd69c0573c1e5ddbde934e4e33892bc800817b545db2b54192003892f8b000277fa30e09ee6c90ac7eae605521ec724968439188b04d7fa22da24c36860d283765e3ac9fae8ad0a76e65c253a74d8dc8192f6ffeaf804ae0b08a54886fb289156c062a461d59e1b701824cedb45694922
669d659113177c368142fbf9faaf0eddc06e928d7aded58b840e2ef3250433d4e82c0681ecff0916457badb460b43bb27f832e0785a0f0becc25789197276b8b80037cb54db8a3899b56468695bc92e7dea61f9c3d64e31a481e93081e6a003020117a281de0481db9b2606a032a287d3989d99a68691f0b7bc4df43b6fbae0a5a42c3cd069d6cb1437ac745360501c0224e9fab79cb3ba62b2de6af5fa81279701e978f79e0127aab0b8d4f7528c6b733897167abb9be39ed13c245d3bc6dd7d8edf1be5844a3978a36e1f06cbc5c91f45e5cdc892a92a83ceca5ad86b50905a135c7f7c295977f837f5824670e0ed94d6cfdfe8df17b9b1821fa3728d3d56df9282b18cb9da1f755749bccc6cae0650911a52a1466385983467d0c5d814cca39f85adece4a438379f1458d837b1be0764f8b240b16bea9dcf84c0ef697ebff179d94e 1372153276 0 0 \\x04000000 \\x60819506092a864886f71201020202006f8185308182a003020105a10302010fa2763074a003020117a26d046b7f28020fbc8a21e2262984e72f679f2f479ca8f65781d8036441108e5e766292155e35b9549677886209f827195b561bda2333f4b66e51805385abbfb1e7ff1b676621aad8408d218530cfde68aaf4fc6513f1d07c87a2007b1c5d44c8d2f5d6acb47bc569f9faeff1ac7a \n", 2970) = 2970
close(5)                                = 0
open("/dev/urandom", O_RDONLY)          = 5
read(5, "\354\345pp@\231!\215\232r-5>\366\226\306\245\305\363\355", 20) = 20
close(5)                                = 0


* Re: Kerberized NFS-Server Problem still present in 3.10.0-rc2
  2013-06-25  9:46             ` Sven Geggus
@ 2013-07-01 12:09               ` Sven Geggus
       [not found]               ` <20130701205234.GF19945@fieldses.org>
  1 sibling, 0 replies; 19+ messages in thread
From: Sven Geggus @ 2013-07-01 12:09 UTC (permalink / raw)
  To: linux-nfs

Just an update on this. I have now compiled Linux 3.10 and updated
my userland to nfs-utils 1.2.8 (from Debian unstable).

The problem persists. The NFS server stops working with any vanilla kernel
newer than 3.8.x, including 3.10.

Sven

-- 
To regain control of your account, please click on the connection roar.
(from an eBay phishing mail)

/me is giggls@ircnet, http://sven.gegg.us/ on the Web


* Re: Kerberized NFS-Server Problem still present in 3.10.0-rc2
       [not found]               ` <20130701205234.GF19945@fieldses.org>
@ 2013-07-02  8:05                 ` Sven Geggus
       [not found]                   ` <20130705181859.GA8288@fieldses.org>
  0 siblings, 1 reply; 19+ messages in thread
From: Sven Geggus @ 2013-07-02  8:05 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: nfs

J. Bruce Fields wrote on Monday, 1 July, at 22:52:

> It includes the contents of /etc/krb5.keytab, so I'm guessing so.

The server is only a test VM anyway.

> It'd be interesting to know exactly where nfsd decides to drop the
> rpc that uses this new context....  Turning on all the nfsd and rpc
> debugging *might* help there.  (rpcdebug -m rpc -s all; rpcdebug -m nfsd
> -s all).

 ~/ # rpcdebug -m rpc -s all
rpc        xprt call debug nfs auth bind sched trans svcsock svcdsp misc cache
 ~/ # rpcdebug -m nfsd -s all
nfsd       sock fh export svc proc fileop auth repcache xdr lockd

Here is what I get (Kernel is vanilla 3.10.0):

Jul  2 09:52:41 vnfsrv kernel: Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
Jul  2 09:52:41 vnfsrv kernel: NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
Jul  2 09:52:41 vnfsrv kernel: NFSD: starting 90-second grace period (net ffffffff81556500)
Jul  2 09:54:11 vnfsrv kernel: NFSD: laundromat service - starting
Jul  2 09:54:11 vnfsrv kernel: NFSD: end of grace period
Jul  2 09:54:11 vnfsrv kernel: NFSD: laundromat_main - sleeping for 90 seconds
Jul  2 09:55:42 vnfsrv kernel: NFSD: laundromat service - starting
Jul  2 09:55:42 vnfsrv kernel: NFSD: laundromat_main - sleeping for 90 seconds
Jul  2 09:56:42 vnfsrv kernel: svc: socket ffff8800167aad40 TCP (listen) state change 10
Jul  2 09:56:42 vnfsrv kernel: svc: transport ffff8800159c0000 served by daemon ffff88001657a000
Jul  2 09:56:42 vnfsrv kernel: svc: socket ffff8800162147c0 TCP (listen) state change 1
Jul  2 09:56:42 vnfsrv kernel: svc: tcp_accept ffff8800159c0000 sock ffff880016a2e840
Jul  2 09:56:42 vnfsrv kernel: nfsd: connect from 10.1.7.30, port=740
Jul  2 09:56:42 vnfsrv kernel: svc: svc_setup_socket ffff880016a2ed40
Jul  2 09:56:42 vnfsrv kernel: setting up TCP socket for reading
Jul  2 09:56:42 vnfsrv kernel: svc: svc_setup_socket created ffff880015259000 (inet ffff8800162147c0)
Jul  2 09:56:42 vnfsrv kernel: svc: transport ffff880015259000 served by daemon ffff8800167cc000
Jul  2 09:56:42 vnfsrv kernel: svc: server ffff8800167cc000, pool 0, transport ffff880015259000, inuse=3
Jul  2 09:56:42 vnfsrv kernel: svc: tcp_recv ffff880015259000 data 1 conn 0 close 0
Jul  2 09:56:42 vnfsrv kernel: svc: socket ffff880015259000 recvfrom(ffff8800152592a4, 0) = 4
Jul  2 09:56:42 vnfsrv kernel: svc: TCP record, 40 bytes
Jul  2 09:56:42 vnfsrv kernel: svc: socket ffff880015259000 recvfrom(ffff88001607c028, 4056) = 40
Jul  2 09:56:42 vnfsrv kernel: svc: TCP final record (40 bytes)
Jul  2 09:56:42 vnfsrv kernel: svc: got len=40
Jul  2 09:56:42 vnfsrv kernel: svc: svc_authenticate (0)
Jul  2 09:56:42 vnfsrv kernel: svc: transport ffff8800159c0000 served by daemon ffff880016576000
Jul  2 09:56:42 vnfsrv kernel: svc: tcp_accept ffff8800159c0000 sock ffff880016a2e840
Jul  2 09:56:42 vnfsrv kernel: svc: server ffff880016576000 waiting for data (to = 900000)
Jul  2 09:56:42 vnfsrv kernel: svc: calling dispatcher
Jul  2 09:56:42 vnfsrv kernel: svc: server ffff88001657a000 waiting for data (to = 900000)
Jul  2 09:56:42 vnfsrv kernel: nfsd_dispatch: vers 4 proc 0
Jul  2 09:56:42 vnfsrv kernel: svc: socket ffff880015259000 sendto([ffff8800165d4000 28... ], 28) = 28 (addr 10.1.7.30, port=740)
Jul  2 09:56:42 vnfsrv kernel: svc: server ffff8800167cc000 waiting for data (to = 900000)
Jul  2 09:56:42 vnfsrv kernel: svc: socket ffff8800167aad40 TCP (listen) state change 10
Jul  2 09:56:42 vnfsrv kernel: svc: transport ffff8800159c0000 served by daemon ffff8800167cc000
Jul  2 09:56:42 vnfsrv kernel: svc: socket ffff880015277500 TCP (listen) state change 1
Jul  2 09:56:42 vnfsrv kernel: svc: tcp_accept ffff8800159c0000 sock ffff880016a2e840
Jul  2 09:56:42 vnfsrv kernel: nfsd: connect from unprivileged port: 10.1.7.30, port=55093
Jul  2 09:56:42 vnfsrv kernel: nfsd: connect from 10.1.7.30, port=55093
Jul  2 09:56:42 vnfsrv kernel: svc: svc_setup_socket ffff880016a10d00
Jul  2 09:56:43 vnfsrv kernel: setting up TCP socket for reading
Jul  2 09:56:43 vnfsrv kernel: svc: svc_setup_socket created ffff8800158ce000 (inet ffff880015277500)
Jul  2 09:56:43 vnfsrv kernel: svc: transport ffff8800158ce000 served by daemon ffff88001657a000
Jul  2 09:56:43 vnfsrv kernel: svc: server ffff88001657a000, pool 0, transport ffff8800158ce000, inuse=3
Jul  2 09:56:43 vnfsrv kernel: svc: tcp_recv ffff8800158ce000 data 1 conn 0 close 0
Jul  2 09:56:43 vnfsrv kernel: svc: socket ffff8800158ce000 recvfrom(ffff8800158ce2a4, 0) = 4
Jul  2 09:56:43 vnfsrv kernel: svc: TCP record, 1380 bytes
Jul  2 09:56:43 vnfsrv kernel: svc: socket ffff8800158ce000 recvfrom(ffff880016570564, 2716) = 1380
Jul  2 09:56:43 vnfsrv kernel: svc: TCP final record (1380 bytes)
Jul  2 09:56:43 vnfsrv kernel: svc: transport ffff8800159c0000 served by daemon ffff880016576000
Jul  2 09:56:43 vnfsrv kernel: svc: tcp_accept ffff8800159c0000 sock ffff880016a2e840
Jul  2 09:56:43 vnfsrv kernel: svc: server ffff880016576000 waiting for data (to = 900000)
Jul  2 09:56:43 vnfsrv kernel: svc: server ffff8800167cc000 waiting for data (to = 900000)
Jul  2 09:56:43 vnfsrv kernel: svc: got len=1380
Jul  2 09:56:43 vnfsrv kernel: svc: svc_authenticate (6)
Jul  2 09:56:43 vnfsrv kernel: RPC:       svcauth_gss: argv->iov_len = 1352
Jul  2 09:56:43 vnfsrv kernel: RPC:       Want update, refage=120, age=0
Jul  2 09:56:43 vnfsrv kernel: RPC:       krb5_encrypt returns 0
Jul  2 09:56:43 vnfsrv kernel: RPC:       krb5_encrypt returns 0
Jul  2 09:56:43 vnfsrv kernel: RPC:       krb5_encrypt returns 0
Jul  2 09:56:43 vnfsrv kernel: RPC:       krb5_encrypt returns 0
Jul  2 09:56:43 vnfsrv kernel: RPC:       krb5_encrypt returns 0
Jul  2 09:56:43 vnfsrv kernel: RPC:       krb5_encrypt returns 0
Jul  2 09:56:43 vnfsrv kernel: RPC:       krb5_encrypt returns 0
Jul  2 09:56:43 vnfsrv kernel: RPC:       krb5_encrypt returns 0
Jul  2 09:56:43 vnfsrv kernel: RPC:       krb5_encrypt returns 0
Jul  2 09:56:43 vnfsrv kernel: RPC:       krb5_encrypt returns 0
Jul  2 09:56:43 vnfsrv kernel: RPC:       krb5_encrypt returns 0
Jul  2 09:56:43 vnfsrv kernel: RPC:       krb5_encrypt returns 0
Jul  2 09:56:43 vnfsrv kernel: RPC:       gss_import_sec_context_kerberos: returning 0
Jul  2 09:56:43 vnfsrv kernel: RPC:       gss_get_mic_v2
Jul  2 09:56:43 vnfsrv kernel: svc: socket ffff8800158ce000 sendto([ffff880016571000 232... ], 232) = 232 (addr 10.1.7.30, port=55093)
Jul  2 09:56:43 vnfsrv kernel: svc: socket ffff880015277500 TCP data ready (svsk ffff8800158ce000)
Jul  2 09:56:43 vnfsrv kernel: svc: transport ffff8800158ce000 served by daemon ffff880016576000
Jul  2 09:56:43 vnfsrv kernel: svc: socket ffff880015277500 TCP (connected) state change 8 (svsk ffff8800158ce000)
Jul  2 09:56:43 vnfsrv kernel: svc: transport ffff8800158ce000 busy, not enqueued
Jul  2 09:56:43 vnfsrv kernel: svc: socket ffff880015277500 TCP data ready (svsk ffff8800158ce000)
Jul  2 09:56:43 vnfsrv kernel: svc: transport ffff8800158ce000 busy, not enqueued
Jul  2 09:56:43 vnfsrv kernel: svc: socket ffff8800162147c0 TCP data ready (svsk ffff880015259000)
Jul  2 09:56:43 vnfsrv kernel: svc: transport ffff880015259000 served by daemon ffff8800167cc000
Jul  2 09:56:43 vnfsrv kernel: svc_recv: found XPT_CLOSE
Jul  2 09:56:43 vnfsrv kernel: svc: svc_delete_xprt(ffff8800158ce000)
Jul  2 09:56:43 vnfsrv kernel: svc: svc_tcp_sock_detach(ffff8800158ce000)
Jul  2 09:56:43 vnfsrv kernel: svc: server ffff8800167cc000, pool 0, transport ffff880015259000, inuse=2
Jul  2 09:56:43 vnfsrv kernel: svc: tcp_recv ffff880015259000 data 1 conn 0 close 0
Jul  2 09:56:43 vnfsrv kernel: svc: socket ffff880015259000 recvfrom(ffff8800152592a4, 0) = 4
Jul  2 09:56:43 vnfsrv kernel: svc: svc_sock_detach(ffff8800158ce000)
Jul  2 09:56:43 vnfsrv kernel: svc: server ffff880016576000 waiting for data (to = 900000)
Jul  2 09:56:43 vnfsrv kernel: svc: transport ffff8800158ce000 busy, not enqueued
Jul  2 09:56:43 vnfsrv kernel: svc: svc_sock_free(ffff8800158ce000)
Jul  2 09:56:43 vnfsrv kernel: svc: server ffff88001657a000 waiting for data (to = 900000)
Jul  2 09:56:43 vnfsrv kernel: svc: TCP record, 200 bytes
Jul  2 09:56:43 vnfsrv kernel: svc: socket ffff880015259000 recvfrom(ffff88001607c0c8, 3896) = 200
Jul  2 09:56:43 vnfsrv kernel: svc: TCP final record (200 bytes)
Jul  2 09:56:43 vnfsrv kernel: svc: got len=200
Jul  2 09:56:43 vnfsrv kernel: svc: svc_authenticate (6)
Jul  2 09:56:43 vnfsrv kernel: RPC:       svcauth_gss: argv->iov_len = 172
Jul  2 09:56:43 vnfsrv kernel: RPC:       gss_verify_mic_v2
Jul  2 09:56:43 vnfsrv kernel: RPC:       gss_get_mic_v2
Jul  2 09:56:43 vnfsrv kernel: RPC:       Want update, refage=120, age=0
Jul  2 09:56:43 vnfsrv kernel: RPC:       Want update, refage=120, age=0
Jul  2 09:56:48 vnfsrv kernel: svc: svc_process dropit
Jul  2 09:56:48 vnfsrv kernel: svc: xprt ffff880015259000 dropped request
Jul  2 09:56:48 vnfsrv kernel: svc: server ffff8800167cc000 waiting for data (to = 900000)
Jul  2 09:57:12 vnfsrv kernel: NFSD: laundromat service - starting
Jul  2 09:57:12 vnfsrv kernel: NFSD: laundromat_main - sleeping for 90 seconds
Jul  2 09:57:42 vnfsrv kernel: RPC:       xs_close xprt ffff88001610b800
Jul  2 09:57:42 vnfsrv kernel: RPC:       disconnected transport ffff88001610b800
Jul  2 09:57:42 vnfsrv kernel: RPC:       wake_up_first(ffff88001610b978 "xprt_sending")
Jul  2 09:57:44 vnfsrv kernel: svc: socket ffff8800162147c0 TCP (connected) state change 8 (svsk ffff880015259000)
Jul  2 09:57:44 vnfsrv kernel: svc: transport ffff880015259000 served by daemon ffff8800167cc000
Jul  2 09:57:44 vnfsrv kernel: svc: socket ffff8800162147c0 TCP data ready (svsk ffff880015259000)
Jul  2 09:57:44 vnfsrv kernel: svc: transport ffff880015259000 busy, not enqueued
Jul  2 09:57:44 vnfsrv kernel: svc_recv: found XPT_CLOSE
Jul  2 09:57:44 vnfsrv kernel: svc: svc_delete_xprt(ffff880015259000)
Jul  2 09:57:44 vnfsrv kernel: svc: svc_tcp_sock_detach(ffff880015259000)
Jul  2 09:57:44 vnfsrv kernel: svc: svc_sock_detach(ffff880015259000)
Jul  2 09:57:44 vnfsrv kernel: svc: server ffff8800167cc000 waiting for data (to = 900000)
Jul  2 09:57:44 vnfsrv kernel: svc: socket ffff8800167aad40 TCP (listen) state change 10
Jul  2 09:57:44 vnfsrv kernel: svc: transport ffff8800159c0000 served by daemon ffff880016576000
Jul  2 09:57:44 vnfsrv kernel: svc: socket ffff8800164ed540 TCP (listen) state change 1
Jul  2 09:57:44 vnfsrv kernel: svc: tcp_accept ffff8800159c0000 sock ffff880016a2e840
Jul  2 09:57:44 vnfsrv kernel: nfsd: connect from 10.1.7.30, port=740
Jul  2 09:57:44 vnfsrv kernel: svc: svc_setup_socket ffff880016a2eac0
Jul  2 09:57:44 vnfsrv kernel: setting up TCP socket for reading
Jul  2 09:57:44 vnfsrv kernel: svc: svc_setup_socket created ffff88001669a000 (inet ffff8800164ed540)
Jul  2 09:57:44 vnfsrv kernel: svc: transport ffff88001669a000 served by daemon ffff8800167cc000
Jul  2 09:57:44 vnfsrv kernel: svc: server ffff8800167cc000, pool 0, transport ffff88001669a000, inuse=3
Jul  2 09:57:44 vnfsrv kernel: svc: tcp_recv ffff88001669a000 data 1 conn 0 close 0
Jul  2 09:57:44 vnfsrv kernel: svc: socket ffff88001669a000 recvfrom(ffff88001669a2a4, 0) = 4
Jul  2 09:57:44 vnfsrv kernel: svc: TCP record, 200 bytes
Jul  2 09:57:44 vnfsrv kernel: svc: socket ffff88001669a000 recvfrom(ffff88001607c0c8, 3896) = 200
Jul  2 09:57:44 vnfsrv kernel: svc: TCP final record (200 bytes)
Jul  2 09:57:44 vnfsrv kernel: svc: transport ffff8800159c0000 served by daemon ffff88001657a000
Jul  2 09:57:44 vnfsrv kernel: svc: tcp_accept ffff8800159c0000 sock ffff880016a2e840
Jul  2 09:57:44 vnfsrv kernel: svc: server ffff88001657a000 waiting for data (to = 900000)
Jul  2 09:57:44 vnfsrv kernel: svc: server ffff880016576000 waiting for data (to = 900000)
Jul  2 09:57:44 vnfsrv kernel: svc: got len=200
Jul  2 09:57:44 vnfsrv kernel: svc: svc_authenticate (6)
Jul  2 09:57:44 vnfsrv kernel: RPC:       svcauth_gss: argv->iov_len = 172
Jul  2 09:57:44 vnfsrv kernel: RPC:       gss_verify_mic_v2
Jul  2 09:57:44 vnfsrv kernel: RPC:       gss_get_mic_v2
Jul  2 09:57:44 vnfsrv kernel: RPC:       Want update, refage=120, age=61
Jul  2 09:57:49 vnfsrv kernel: svc: svc_process dropit
Jul  2 09:57:49 vnfsrv kernel: svc: xprt ffff88001669a000 dropped request
Jul  2 09:57:49 vnfsrv kernel: svc: server ffff8800167cc000 waiting for data (to = 900000)
Jul  2 09:58:42 vnfsrv kernel: NFSD: laundromat service - starting
Jul  2 09:58:42 vnfsrv kernel: NFSD: laundromat_main - sleeping for 90 seconds
Jul  2 09:59:45 vnfsrv kernel: svc: socket ffff8800164ed540 TCP (connected) state change 8 (svsk ffff88001669a000)
Jul  2 09:59:45 vnfsrv kernel: svc: transport ffff88001669a000 served by daemon ffff8800167cc000
Jul  2 09:59:45 vnfsrv kernel: svc: socket ffff8800164ed540 TCP data ready (svsk ffff88001669a000)
Jul  2 09:59:45 vnfsrv kernel: svc: transport ffff88001669a000 busy, not enqueued
Jul  2 09:59:45 vnfsrv kernel: svc_recv: found XPT_CLOSE
Jul  2 09:59:45 vnfsrv kernel: svc: svc_delete_xprt(ffff88001669a000)
Jul  2 09:59:45 vnfsrv kernel: svc: svc_tcp_sock_detach(ffff88001669a000)
Jul  2 09:59:45 vnfsrv kernel: svc: svc_sock_detach(ffff88001669a000)
Jul  2 09:59:45 vnfsrv kernel: svc: server ffff8800167cc000 waiting for data (to = 900000)
Jul  2 09:59:46 vnfsrv kernel: svc: socket ffff8800167aad40 TCP (listen) state change 10
Jul  2 09:59:46 vnfsrv kernel: svc: transport ffff8800159c0000 served by daemon ffff8800167cc000
Jul  2 09:59:46 vnfsrv kernel: svc: socket ffff88001503f440 TCP (listen) state change 1
Jul  2 09:59:46 vnfsrv kernel: svc: tcp_accept ffff8800159c0000 sock ffff880016a2e840
Jul  2 09:59:46 vnfsrv kernel: nfsd: connect from 10.1.7.30, port=740
Jul  2 09:59:46 vnfsrv kernel: svc: svc_setup_socket ffff880016accd40
Jul  2 09:59:46 vnfsrv kernel: setting up TCP socket for reading
Jul  2 09:59:46 vnfsrv kernel: svc: svc_setup_socket created ffff8800159f0000 (inet ffff88001503f440)
Jul  2 09:59:46 vnfsrv kernel: svc: transport ffff8800159f0000 served by daemon ffff880016576000
Jul  2 09:59:46 vnfsrv kernel: svc: server ffff880016576000, pool 0, transport ffff8800159f0000, inuse=3
Jul  2 09:59:46 vnfsrv kernel: svc: tcp_recv ffff8800159f0000 data 1 conn 0 close 0
Jul  2 09:59:46 vnfsrv kernel: svc: socket ffff8800159f0000 recvfrom(ffff8800159f02a4, 0) = 4
Jul  2 09:59:46 vnfsrv kernel: svc: TCP record, 200 bytes
Jul  2 09:59:46 vnfsrv kernel: svc: transport ffff8800159c0000 served by daemon ffff88001657a000
Jul  2 09:59:46 vnfsrv kernel: svc: tcp_accept ffff8800159c0000 sock ffff880016a2e840
Jul  2 09:59:46 vnfsrv kernel: svc: server ffff88001657a000 waiting for data (to = 900000)
Jul  2 09:59:46 vnfsrv kernel: svc: server ffff8800167cc000 waiting for data (to = 900000)
Jul  2 09:59:46 vnfsrv kernel: svc: socket ffff8800159f0000 recvfrom(ffff8800166f80c8, 3896) = 200
Jul  2 09:59:46 vnfsrv kernel: svc: TCP final record (200 bytes)
Jul  2 09:59:46 vnfsrv kernel: svc: got len=200
Jul  2 09:59:46 vnfsrv kernel: svc: svc_authenticate (6)
Jul  2 09:59:46 vnfsrv kernel: RPC:       svcauth_gss: argv->iov_len = 172
Jul  2 09:59:46 vnfsrv kernel: RPC:       gss_verify_mic_v2
Jul  2 09:59:46 vnfsrv kernel: RPC:       gss_get_mic_v2
Jul  2 09:59:46 vnfsrv kernel: RPC:       Want update, refage=120, age=0
Jul  2 09:59:51 vnfsrv kernel: svc: svc_process dropit
Jul  2 09:59:51 vnfsrv kernel: svc: xprt ffff8800159f0000 dropped request
Jul  2 09:59:51 vnfsrv kernel: svc: server ffff880016576000 waiting for data (to = 900000)
Jul  2 10:00:12 vnfsrv kernel: NFSD: laundromat service - starting
Jul  2 10:00:12 vnfsrv kernel: NFSD: laundromat_main - sleeping for 90 seconds
Jul  2 10:00:47 vnfsrv kernel: svc: socket ffff88001503f440 TCP (connected) state change 8 (svsk ffff8800159f0000)
Jul  2 10:00:47 vnfsrv kernel: svc: transport ffff8800159f0000 served by daemon ffff880016576000
Jul  2 10:00:47 vnfsrv kernel: svc: socket ffff88001503f440 TCP data ready (svsk ffff8800159f0000)
Jul  2 10:00:47 vnfsrv kernel: svc: transport ffff8800159f0000 busy, not enqueued
Jul  2 10:00:47 vnfsrv kernel: svc_recv: found XPT_CLOSE
Jul  2 10:00:47 vnfsrv kernel: svc: svc_delete_xprt(ffff8800159f0000)
Jul  2 10:00:47 vnfsrv kernel: svc: svc_tcp_sock_detach(ffff8800159f0000)
Jul  2 10:00:47 vnfsrv kernel: svc: svc_sock_detach(ffff8800159f0000)
Jul  2 10:00:47 vnfsrv kernel: svc: server ffff880016576000 waiting for data (to = 900000)
Jul  2 10:00:48 vnfsrv kernel: svc: socket ffff8800167aad40 TCP (listen) state change 10
Jul  2 10:00:48 vnfsrv kernel: svc: transport ffff8800159c0000 served by daemon ffff880016576000
Jul  2 10:00:48 vnfsrv kernel: svc: socket ffff880015a44d80 TCP (listen) state change 1
Jul  2 10:00:48 vnfsrv kernel: svc: tcp_accept ffff8800159c0000 sock ffff880016a2e840
Jul  2 10:00:48 vnfsrv kernel: nfsd: connect from 10.1.7.30, port=740
Jul  2 10:00:48 vnfsrv kernel: svc: svc_setup_socket ffff880016acc840
Jul  2 10:00:48 vnfsrv kernel: setting up TCP socket for reading
Jul  2 10:00:48 vnfsrv kernel: svc: svc_setup_socket created ffff8800152d7000 (inet ffff880015a44d80)
Jul  2 10:00:48 vnfsrv kernel: svc: transport ffff8800152d7000 served by daemon ffff8800167cc000
Jul  2 10:00:48 vnfsrv kernel: svc: server ffff8800167cc000, pool 0, transport ffff8800152d7000, inuse=3
Jul  2 10:00:48 vnfsrv kernel: svc: tcp_recv ffff8800152d7000 data 1 conn 0 close 0
Jul  2 10:00:48 vnfsrv kernel: svc: socket ffff8800152d7000 recvfrom(ffff8800152d72a4, 0) = 4
Jul  2 10:00:48 vnfsrv kernel: svc: TCP record, 200 bytes
Jul  2 10:00:48 vnfsrv kernel: svc: socket ffff8800152d7000 recvfrom(ffff88001607c0c8, 3896) = 200
Jul  2 10:00:48 vnfsrv kernel: svc: TCP final record (200 bytes)
Jul  2 10:00:48 vnfsrv kernel: svc: got len=200
Jul  2 10:00:48 vnfsrv kernel: svc: svc_authenticate (6)
Jul  2 10:00:48 vnfsrv kernel: svc: transport ffff8800159c0000 served by daemon ffff88001657a000
Jul  2 10:00:48 vnfsrv kernel: svc: tcp_accept ffff8800159c0000 sock ffff880016a2e840
Jul  2 10:00:48 vnfsrv kernel: svc: server ffff88001657a000 waiting for data (to = 900000)
Jul  2 10:00:48 vnfsrv kernel: svc: server ffff880016576000 waiting for data (to = 900000)
Jul  2 10:00:48 vnfsrv kernel: RPC:       svcauth_gss: argv->iov_len = 172
Jul  2 10:00:48 vnfsrv kernel: RPC:       gss_verify_mic_v2
Jul  2 10:00:48 vnfsrv kernel: RPC:       gss_get_mic_v2
Jul  2 10:00:48 vnfsrv kernel: RPC:       Want update, refage=120, age=62
Jul  2 10:00:53 vnfsrv kernel: svc: svc_process dropit
Jul  2 10:00:53 vnfsrv kernel: svc: xprt ffff8800152d7000 dropped request
Jul  2 10:00:53 vnfsrv kernel: svc: server ffff8800167cc000 waiting for data (to = 900000)
Jul  2 10:01:42 vnfsrv kernel: NFSD: laundromat service - starting
Jul  2 10:01:42 vnfsrv kernel: NFSD: laundromat_main - sleeping for 90 seconds
Jul  2 10:02:00 vnfsrv kernel: revisit canceled
Jul  2 10:02:00 vnfsrv kernel: svc: svc_sock_free(ffff8800159f0000)
Jul  2 10:02:00 vnfsrv kernel: revisit queued
Jul  2 10:02:00 vnfsrv kernel: svc: transport ffff8800152d7000 served by daemon ffff8800167cc000
Jul  2 10:02:00 vnfsrv kernel: svc: server ffff8800167cc000, pool 0, transport ffff8800152d7000, inuse=3
Jul  2 10:02:00 vnfsrv kernel: svc: got len=200
Jul  2 10:02:00 vnfsrv kernel: svc: transport ffff8800152d7000 served by daemon ffff880016576000
Jul  2 10:02:00 vnfsrv kernel: svc: server ffff880016576000, pool 0, transport ffff8800152d7000, inuse=5
Jul  2 10:02:00 vnfsrv kernel: svc: tcp_recv ffff8800152d7000 data 0 conn 0 close 0
Jul  2 10:02:00 vnfsrv kernel: svc: socket ffff8800152d7000 recvfrom(ffff8800152d72a0, 4) = -11
Jul  2 10:02:00 vnfsrv kernel: RPC: TCP recv_record got -11
Jul  2 10:02:00 vnfsrv kernel: svc: svc_authenticate (6)
Jul  2 10:02:00 vnfsrv kernel: RPC:       svcauth_gss: argv->iov_len = 172
Jul  2 10:02:00 vnfsrv kernel: RPC:       gss_get_mic_v2
Jul  2 10:02:00 vnfsrv kernel: RPC: TCP recvfrom got EAGAIN
Jul  2 10:02:00 vnfsrv kernel: svc: got len=0
Jul  2 10:02:00 vnfsrv kernel: svc: server ffff880016576000 waiting for data (to = 900000)
Jul  2 10:02:00 vnfsrv kernel: RPC:       Want update, refage=120, age=0
Jul  2 10:02:05 vnfsrv kernel: svc: svc_process dropit
Jul  2 10:02:05 vnfsrv kernel: svc: xprt ffff8800152d7000 dropped request
Jul  2 10:02:05 vnfsrv kernel: svc: server ffff8800167cc000 waiting for data (to = 900000)
Jul  2 10:02:23 vnfsrv kernel: svc: socket ffff880015a44d80 TCP (connected) state change 8 (svsk ffff8800152d7000)
Jul  2 10:02:23 vnfsrv kernel: svc: transport ffff8800152d7000 served by daemon ffff8800167cc000
Jul  2 10:02:23 vnfsrv kernel: svc: socket ffff880015a44d80 TCP data ready (svsk ffff8800152d7000)
Jul  2 10:02:23 vnfsrv kernel: svc: transport ffff8800152d7000 busy, not enqueued
Jul  2 10:02:23 vnfsrv kernel: svc_recv: found XPT_CLOSE
Jul  2 10:02:23 vnfsrv kernel: svc: svc_delete_xprt(ffff8800152d7000)
Jul  2 10:02:23 vnfsrv kernel: svc: svc_tcp_sock_detach(ffff8800152d7000)
Jul  2 10:02:23 vnfsrv kernel: svc: svc_sock_detach(ffff8800152d7000)
Jul  2 10:02:23 vnfsrv kernel: svc: server ffff8800167cc000 waiting for data (to = 900000)
Jul  2 10:02:43 vnfsrv kernel: svc_age_temp_xprts

Regards

Sven

-- 
"Das ist halt der Unterschied: Unix ist ein Betriebssystem mit Tradition,
 die anderen sind einfach von sich aus unlogisch."
                          (Anselm Lingnau in de.comp.os.unix.discussion)
/me ist giggls@ircnet, http://sven.gegg.us/ im WWW


* Re: Kerberized NFS-Server Problem still present in 3.10.0-rc2
       [not found]                     ` <20130705203435.GE8288@fieldses.org>
@ 2013-07-08  8:08                       ` Sven Geggus
  2013-07-08 13:35                         ` J. Bruce Fields
  2013-07-09 16:57                         ` J. Bruce Fields
  0 siblings, 2 replies; 19+ messages in thread
From: Sven Geggus @ 2013-07-08  8:08 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: linux-nfs

[-- Attachment #1: Type: text/plain, Size: 1192 bytes --]

Monday Monday, so good to me :)

On Friday, 5. Jul 2013 22:34, J. Bruce Fields wrote:

> Hm, might be interesting to see what mountd's doing too:
> 
> 	strace -p $(pidof rpc.svcgssd) -s4096 -e trace=open,close,read,write

OK, output attached. While reading this output I recognized that I had
overlooked the following error messages from rpc.svcgssd and rpc.mountd:

Jul  8 09:53:38 vnfsrv rpc.svcgssd[2914]: ERROR: GSS-API: error in gss_free_lucid_sec_context(): GSS_S_NO_CONTEXT (No context has been established) - Unknown error
Jul  8 09:53:38 vnfsrv rpc.svcgssd[2914]: WARN: failed to free lucid sec context
Jul  8 09:53:38 vnfsrv rpc.mountd[2927]: qword_eol: fprintf failed: errno 22 (Invalid argument)

> Hm, maybe this gid upcall?  Does the following (probably not completely
> right) help?

Yes, we are getting there. While the strange messages from rpc.svcgssd
persist, the mount now seems to work again with your patch applied.

Regards

Sven

-- 
"Remember, democracy never lasts long. It soon wastes itself,
exhausts and murders itself. There never was a democracy yet
that did not commit suicide." (John Quincy Adams)
/me is giggls@ircnet, http://sven.gegg.us/ on the Web

[-- Attachment #2: strace-output-rpc.svcgssd --]
[-- Type: text/plain, Size: 18563 bytes --]

read(4, "\\x \\x6082051f06092a864886f71201020201006e82050e3082050aa003020105a10302010ea20703050020000000a382040761820403308203ffa003020105a1101b0e50432e494954422e4648472e4445a2273025a003020103a11e301c1b036e66731b15766e667372762e70632e696974622e6668672e6465a38203bb308203b7a003020117a103020102a28203a9048203a5c0be536630d9eddaa21dcaa5c68959ddcb5da7ea2ea2dbf2e641bd866fc5c5d1c0c21e261366d40775d8e8cec0c403f73255555ff92a20b373a778819bb1fa891e9ccfd6622a683e5a3c466e731f5b884ed79ff25f44bbcf20b493753de50b0cb7b2dc25266427bf658db58a51daba1339897c4d6709bb8cf220384faa7203bf9a92cfa2391d10eb3cf58a31aed3921b23cf1e49e0b91b615b51fea8d2c10307eec97ae4c8b5a0305105f1e5532f5f38dfc3ba6bd1f9667a5abd83fda0efaea4b532a5962c01a46fe8306072d3bc6d62062b10e73c0efe03316b3f9b8181b67d87fbc62cb371755b888e941cbd105d47e92b69439a87c36e438e5a0ee64f832efb8b46f0c4c65222721a06bb29456a744df5fe56f6b91c2d947e7f41b15ed46dc73d19c5a11974cc50d5faa53ffd27014aba6b69adf28d8e9efa4630c595477246f2d1be456b9fa446b55ef4182dd5f0e1bd0f418ce1bca9181759e68424ca9c55c9a17e62ab19d84be413615f650bb0ba10d1fc938b81cda4623f113dbf74cab8254f5e7e3e6a16a86efb140c1abedc65176795067bf52bb3d273b0833859878e9b51894689d24080929b57763a507e7435effb351f6ac4f2658fd92ab51c6b2b0b234ff413cbaf05e150ad19b220a8d63244bc6ebf601bc3ed3e5f5665c35b4bcb5696f8b5bcaff5ea829cd1cadb8ad1de0010e36fb8bf10305b396614d7bc483e2318391278c3c104a8d49c52b0c64091f4771b9aa05a710273c0a9fc074577b0fd7ad076fb14ba7760c8a95ffa49e52c33520420304d90d1dbe1c7d4960ef19fec74dd1af6ec7da006ac0dbf7aea78520c7ee0a378e4ac1cbe84b40b6d7e6aa43a2dcdc059acd71292590993b403f10f54c9f003996fe58b135dd2a3c1e23c6c39414a7a06de39e1625e209c1f23004a413577bd4aa9456107158a2e44490aad8a6db67a0672eb7ae51e1b01c0bf4ce63cc5e4fdfeced6eb753f76f0940366c4c577ac6794030d8850f12c2052ee20fdbea7b0bcc10392d8bb96a114088787dc1183bfa9bb86f0cb5872a0087b9234015ccd9bc33ea7f415e3d4669197cc036e9cbc4849eb08e4e91e1d7c7515851c3e473ac1e92076f2b4ad549a99ca0c4b50431bc5f6e6693f3c5697f96168f91221e991f08b0820d795d24b9caf587d70916c0e1b90fd51c6e615631039
c1e748a841e349eb6d5272216bd2eec53ad09826e12a701886c46656627a1379f48e8b79592c2fd424b2b29414dec3d11093f618af48fa912105fd163c9b76df3a6f5a2608629b27e25b7476cc80cff8d8fdd466d5ff7fa481e93081e6a003020117a281de0481db5d54a6ca5f33bcd5f204274dc255945eb2ee22e4657eeae001d53df27a8bbdaf247e598050e5e9957588aad75de62ec64aaf586aed86cc173ce41ff9a590d5ac5588166cb2904470731606da0e29a16174ef44dab40383ba76f849994653073b94c1b50463cfb9e24068e07d93effa65ebd95ebb33a812a412fc6367d71dbcba30f7e95365fd787c00e23c98f4fb10a8bfc292426bad83305778d58e43758cd4b5005672c494812521d29c355a03e5e046927b5aff8ff12d7d9f07e70a4d3e8b7859fe9d8ae56595a3e89cb289f4f720ec28fbe179a160afcb2b4b\n", 4096) = 2636
open("/proc/fs/nfsd/supported_krb5_enctypes", O_RDONLY) = 13
read(13, "18,17,16,23,3,1,2", 4096)     = 17
read(13, "", 4096)                      = 0
close(13)                               = 0
open("/etc/krb5.conf", O_RDONLY)        = 13
read(13, "[libdefaults]\n  default_realm = PC.IITB.FHG.DE\n  default_tkt_enctypes = arcfour-hmac-md5\n  default_tgs_enctypes = arcfour-hmac-md5\n\t\n  forwardable = true\n\n[realms]\n  PC.IITB.FHG.DE = {\n    kdc = pcdc1.pc.iitb.fhg.de\n    kdc = pcdc2.pc.iitb.fhg.de\n  }\n\n[domain_realm]\n  pc.iitb.fhg.de = PC.IITB.FHG.DE\n  .pc.iitb.fhg.de = PC.IITB.FHG.DE\n\n\n", 4096) = 338
read(13, "", 4096)                      = 0
close(13)                               = 0
open("/dev/urandom", O_RDONLY)          = 13
read(13, "=\255\177\337\3\32\23\234\363\336\6\221\"7b\232\177\272\35\255\10\317e\226\330\257\1\2\271\33(\303\r\247^\1\233\234>\372\26.\210\207\255]X\365\225\357<7t\22\34\243yp\327\325UvZ\4", 64) = 64
close(13)                               = 0
open("/etc/krb5.conf", O_RDONLY)        = 13
read(13, "[libdefaults]\n  default_realm = PC.IITB.FHG.DE\n  default_tkt_enctypes = arcfour-hmac-md5\n  default_tgs_enctypes = arcfour-hmac-md5\n\t\n  forwardable = true\n\n[realms]\n  PC.IITB.FHG.DE = {\n    kdc = pcdc1.pc.iitb.fhg.de\n    kdc = pcdc2.pc.iitb.fhg.de\n  }\n\n[domain_realm]\n  pc.iitb.fhg.de = PC.IITB.FHG.DE\n  .pc.iitb.fhg.de = PC.IITB.FHG.DE\n\n\n", 4096) = 338
read(13, "", 4096)                      = 0
close(13)                               = 0
open("/dev/urandom", O_RDONLY)          = 13
read(13, "#\261\\\233\220\376I\227\27@\326c&\177\f\372 \371)B\1\237\302!\373\375\215\3502\316\325|71\320\275\275\340\237\375\353RAR\256\307\222\254'\207{&\265\36d\n6\244Y\36`C\246\345", 64) = 64
close(13)                               = 0
open("/etc/krb5.conf", O_RDONLY)        = 13
read(13, "[libdefaults]\n  default_realm = PC.IITB.FHG.DE\n  default_tkt_enctypes = arcfour-hmac-md5\n  default_tgs_enctypes = arcfour-hmac-md5\n\t\n  forwardable = true\n\n[realms]\n  PC.IITB.FHG.DE = {\n    kdc = pcdc1.pc.iitb.fhg.de\n    kdc = pcdc2.pc.iitb.fhg.de\n  }\n\n[domain_realm]\n  pc.iitb.fhg.de = PC.IITB.FHG.DE\n  .pc.iitb.fhg.de = PC.IITB.FHG.DE\n\n\n", 4096) = 338
read(13, "", 4096)                      = 0
close(13)                               = 0
open("/dev/urandom", O_RDONLY)          = 13
read(13, "\352\224\336\fM\36Ys\2319)\333\10\256Pa8S?\274\215x\10I\5\270u\225\21\206\370\314\343R\214\341\25eM\322\233\267\304cj\347\323dqH~\37\241?r\17\330\0j\226<o\262{", 64) = 64
close(13)                               = 0
open("/etc/krb5.conf", O_RDONLY)        = 13
read(13, "[libdefaults]\n  default_realm = PC.IITB.FHG.DE\n  default_tkt_enctypes = arcfour-hmac-md5\n  default_tgs_enctypes = arcfour-hmac-md5\n\t\n  forwardable = true\n\n[realms]\n  PC.IITB.FHG.DE = {\n    kdc = pcdc1.pc.iitb.fhg.de\n    kdc = pcdc2.pc.iitb.fhg.de\n  }\n\n[domain_realm]\n  pc.iitb.fhg.de = PC.IITB.FHG.DE\n  .pc.iitb.fhg.de = PC.IITB.FHG.DE\n\n\n", 4096) = 338
read(13, "", 4096)                      = 0
close(13)                               = 0
open("/dev/urandom", O_RDONLY)          = 13
read(13, "\362\0071\321\233\241\241\22\207[\363\352\314\327\2\273\265\1Q\36\215\205\355\255:\355\327\16\30\237\365\241\t\243}=\277O\220\v'P\202\254\36\203\212\317\26\311\360\377n\246\242\27\7\325\241\311\332\33V\257", 64) = 64
close(13)                               = 0
open("/etc/krb5.conf", O_RDONLY)        = 13
read(13, "[libdefaults]\n  default_realm = PC.IITB.FHG.DE\n  default_tkt_enctypes = arcfour-hmac-md5\n  default_tgs_enctypes = arcfour-hmac-md5\n\t\n  forwardable = true\n\n[realms]\n  PC.IITB.FHG.DE = {\n    kdc = pcdc1.pc.iitb.fhg.de\n    kdc = pcdc2.pc.iitb.fhg.de\n  }\n\n[domain_realm]\n  pc.iitb.fhg.de = PC.IITB.FHG.DE\n  .pc.iitb.fhg.de = PC.IITB.FHG.DE\n\n\n", 4096) = 338
read(13, "", 4096)                      = 0
close(13)                               = 0
open("/dev/urandom", O_RDONLY)          = 13
read(13, "Lp\354\0\375\202\232:c\360F\262\2\272\226\212\261\311|1b\241\322\250\365\257M\260\302O\276\316\24B)A\265e\342\235\234\254\32\246\311\2.n[\315\205\332[\354\244M\225\271n\317*l\207x", 64) = 64
close(13)                               = 0
open("/etc/krb5.conf", O_RDONLY)        = 13
read(13, "[libdefaults]\n  default_realm = PC.IITB.FHG.DE\n  default_tkt_enctypes = arcfour-hmac-md5\n  default_tgs_enctypes = arcfour-hmac-md5\n\t\n  forwardable = true\n\n[realms]\n  PC.IITB.FHG.DE = {\n    kdc = pcdc1.pc.iitb.fhg.de\n    kdc = pcdc2.pc.iitb.fhg.de\n  }\n\n[domain_realm]\n  pc.iitb.fhg.de = PC.IITB.FHG.DE\n  .pc.iitb.fhg.de = PC.IITB.FHG.DE\n\n\n", 4096) = 338
read(13, "", 4096)                      = 0
close(13)                               = 0
open("/dev/urandom", O_RDONLY)          = 13
read(13, "Y\375\23\321>\315pb(\2427f\350|\367\311\377WEg\v\1\272Ff`\224\364\273?9\370\245\202\271\213\216\311\240\27 \10;\375+U>\273\317\307u0[\303[YL\237\331\236\262\27\2031", 64) = 64
close(13)                               = 0
open("/etc/krb5.conf", O_RDONLY)        = 13
read(13, "[libdefaults]\n  default_realm = PC.IITB.FHG.DE\n  default_tkt_enctypes = arcfour-hmac-md5\n  default_tgs_enctypes = arcfour-hmac-md5\n\t\n  forwardable = true\n\n[realms]\n  PC.IITB.FHG.DE = {\n    kdc = pcdc1.pc.iitb.fhg.de\n    kdc = pcdc2.pc.iitb.fhg.de\n  }\n\n[domain_realm]\n  pc.iitb.fhg.de = PC.IITB.FHG.DE\n  .pc.iitb.fhg.de = PC.IITB.FHG.DE\n\n\n", 4096) = 338
read(13, "", 4096)                      = 0
close(13)                               = 0
open("/dev/urandom", O_RDONLY)          = 13
read(13, "\312F\336\211n_s\340\311\270\376=\212w\347\f\237\17E%\306\236t\5\2558\341\252c\250\366\4\356T\323\353\0\242A\24g\34\n\244ct<\17E\336\303\2328\341\301\274\3612v-X5\324)", 64) = 64
close(13)                               = 0
open("/etc/krb5.conf", O_RDONLY)        = 13
read(13, "[libdefaults]\n  default_realm = PC.IITB.FHG.DE\n  default_tkt_enctypes = arcfour-hmac-md5\n  default_tgs_enctypes = arcfour-hmac-md5\n\t\n  forwardable = true\n\n[realms]\n  PC.IITB.FHG.DE = {\n    kdc = pcdc1.pc.iitb.fhg.de\n    kdc = pcdc2.pc.iitb.fhg.de\n  }\n\n[domain_realm]\n  pc.iitb.fhg.de = PC.IITB.FHG.DE\n  .pc.iitb.fhg.de = PC.IITB.FHG.DE\n\n\n", 4096) = 338
read(13, "", 4096)                      = 0
close(13)                               = 0
open("/dev/urandom", O_RDONLY)          = 13
read(13, "\357OB\240\217 }_\304gQ\363Al\245\36Y\345\317\244iBu\n\271\266\1zj\350rfV\373\244\261\373jz\225[\226\367\1\340\27\331\352\213U\337\370\17ZWus\234!\337\272\372\341O", 64) = 64
close(13)                               = 0
open("/etc/krb5.conf", O_RDONLY)        = 13
read(13, "[libdefaults]\n  default_realm = PC.IITB.FHG.DE\n  default_tkt_enctypes = arcfour-hmac-md5\n  default_tgs_enctypes = arcfour-hmac-md5\n\t\n  forwardable = true\n\n[realms]\n  PC.IITB.FHG.DE = {\n    kdc = pcdc1.pc.iitb.fhg.de\n    kdc = pcdc2.pc.iitb.fhg.de\n  }\n\n[domain_realm]\n  pc.iitb.fhg.de = PC.IITB.FHG.DE\n  .pc.iitb.fhg.de = PC.IITB.FHG.DE\n\n\n", 4096) = 338
read(13, "", 4096)                      = 0
close(13)                               = 0
open("/dev/urandom", O_RDONLY)          = 13
read(13, "\224\341\263\4\305\264\266\327T\20\243g\343Q\32t\3CPA/_Jt\352O\205\376\225\330\365^]\214\242\214\212\200?s\315>\312W\7\373\0\343)b\244\230\250v\6\24265\324\317\21\26\346\212", 64) = 64
close(13)                               = 0
open("/dev/urandom", O_RDONLY)          = 13
read(13, "\304\t\10Xm\320\304\273\371\327\333@\307\332\362\251j\246\356\347P\256\3238H\16\337\226\342\361\317\5\34\241\314W\205\333e\33@)n%z\2178\314\215\0374\204\0070Qd\v\374,\213z\360\212\376", 64) = 64
close(13)                               = 0
open("/etc/krb5.keytab", O_RDONLY)      = 13
read(13, "\5\2\0\0\0D\0\2\0\16PC.IITB.FHG.DE\0\4host\0\25vnfsrv.pc.iitb.fhg.de\0\0\0\1Q\260nq\2\0\1\0\10\346\2177^\v\302\255\250\0\0\0D\0\2\0\16PC.IITB.FHG.DE\0\4host\0\25vnfsrv.pc.iitb.fhg.de\0\0\0\1Q\260nq\2\0\3\0\10\346\2177^\v\302\255\250\0\0\0L\0\2\0\16PC.IITB.FHG.DE\0\4host\0\25vnfsrv.pc.iitb.fhg.de\0\0\0\1Q\260nq\2\0\27\0\20\200\205\364\330\203\321\331]\36\16\0105'\242\377C\0\0\0005\0\2\0\16PC.IITB.FHG.DE\0\4host\0\6VNFSRV\0\0\0\1Q\260nq\2\0\1\0\10\346\2177^\v\302\255\250\0\0\0005\0\2\0\16PC.IITB.FHG.DE\0\4host\0\6VNFSRV\0\0\0\1Q\260nq\2\0\3\0\10\346\2177^\v\302\255\250\0\0\0=\0\2\0\16PC.IITB.FHG.DE\0\4host\0\6VNFSRV\0\0\0\1Q\260nq\2\0\27\0\20\200\205\364\330\203\321\331]\36\16\0105'\242\377C\0\0\0000\0\1\0\16PC.IITB.FHG.DE\0\7VNFSRV$\0\0\0\1Q\260nq\2\0\1\0\10\346\2177^\v\302\255\250\0\0\0000\0\1\0\16PC.IITB.FHG.DE\0\7VNFSRV$\0\0\0\1Q\260nq\2\0\3\0\10\346\2177^\v\302\255\250\0\0\0008\0\1\0\16PC.IITB.FHG.DE\0\7VNFSRV$\0\0\0\1Q\260nq\2\0\27\0\20\200\205\364\330\203\321\331]\36\16\0105'\242\377C\0\0\0C\0\2\0\16PC.IITB.FHG.DE\0\3nfs\0\25vnfsrv.pc.iitb.fhg.de\0\0\0\1Q\260n\231\2\0\1\0\10\346\2177^\v\302\255\250\0\0\0C\0\2\0\16PC.IITB.FHG.DE\0\3nfs\0\25vnfsrv.pc.iitb.fhg.de\0\0\0\1Q\260n\231\2\0\3\0\10\346\2177^\v\302\255\250\0\0\0K\0\2\0\16PC.IITB.FHG.DE\0\3nfs\0\25vnfsrv.pc.iitb.fhg.de\0\0\0\1Q\260n\231\2\0\27\0\20\200\205\364\330\203\321\331]\36\16\0105'\242\377C\0\0\0004\0\2\0\16PC.IITB.FHG.DE\0\3nfs\0\6VNFSRV\0\0\0\1Q\260n\231\2\0\1\0\10\346\2177^\v\302\255\250\0\0\0004\0\2\0\16PC.IITB.FHG.DE\0\3nfs\0\6VNFSRV\0\0\0\1Q\260n\232\2\0\3\0\10\346\2177^\v\302\255\250\0\0\0<\0\2\0\16PC.IITB.FHG.DE\0\3nfs\0\6VNFSRV\0\0\0\1Q\260n\232\2\0\27\0\20\200\205\364\330\203\321\331]\36\16\0105'\242\377C", 8192) = 966
close(13)                               = 0
write(3, "\1\0\0\0\0k\0\0\0HASH:1112FD81D4B188BCA77FFDC69D0C32FF 21:VENUS$@PC.IITB.FHG.DE 40:nfs/vnfsrv.pc.iitb.fhg.de@PC.IITB.FHG.DE\0\214#\1\0\274m\332Q\26\0\0\0VENUS$@PC.IITB.FHG.DE\0)\0\0\0nfs/vnfsrv.pc.iitb.fhg.de@PC.IITB.FHG.DE\0\214#\1\0\274m\332Q", 203) = 203
open("/usr/lib/x86_64-linux-gnu/krb5/plugins/authdata", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
open("/usr/lib/x86_64-linux-gnu/krb5/plugins/authdata", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
open("/dev/urandom", O_RDONLY)          = 13
read(13, "w\253\244\312k\341q\342\251I:\33b34\322\210\32\213\246\225@\25\204\25?\t\344\321\0038\322$:|s\177\30$\232\21B\203\23\331\320\327\310\34W\24x\2520\2\355~\251_\376x\304\262\36", 64) = 64
close(13)                               = 0
open("/dev/urandom", O_RDONLY)          = 13
read(13, "*xH\\0b|J\265\276\233o\253\320\352\212\364;\263\214\376#\n\16\254\363Qp\371\342.\370>\2348W\325\276xF\345q\21\334v\255\244\232d:\237\366~\313\360h\210A\4s\314,\365\201", 64) = 64
close(13)                               = 0
open("/dev/urandom", O_RDONLY)          = 13
read(13, "!\237\364Z\310\305\221)\201[\313\253\277R\276P1\273\304\23\31\222T\266\2635(\2775\213\312\342\300M\272_\21\211x\301_HD\242\314bW\315A\200\25VQ\2113\340\233\n\5\3062\335T_", 64) = 64
close(13)                               = 0
open("/dev/urandom", O_RDONLY)          = 13
read(13, "\333-)\377\31^\347_\311\307\300\257\254\211\221\361}\251\371 \346\253\363J\267\33-\24\316\331=\250\233S\277\326h\0229l\355\242-\310\3630\v\4]\315\22\222\2454z\312\32\\\r\3t\312\2\f", 64) = 64
close(13)                               = 0
write(2, "rpc.svcgssd: ", 13)           = 13
write(2, "ERROR: GSS-API: error in gss_free_lucid_sec_context(): GSS_S_NO_CONTEXT (No context has been established) - Unknown error\n", 122) = 122
write(2, "\n", 1)                       = 1
write(2, "rpc.svcgssd: ", 13)           = 13
write(2, "WARN: failed to free lucid sec context\n", 39) = 39
write(2, "\n", 1)                       = 1
open("/proc/net/rpc/auth.rpcsec.context/channel", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 13
write(13, "\\x02000000 1373304166 -1 -1 0 krb5 \\x0600000066f5da51fe3cc615000000001200000078e144e1ed0b6788fa4c4482792932f76ebdf8af531067a24badeb53eba8d9bd \n", 143) = 143
close(13)                               = 0
open("/proc/net/rpc/auth.rpcsec.init/channel", O_WRONLY) = 13
write(13, "\\x \\x6082051f06092a864886f71201020201006e82050e3082050aa003020105a10302010ea20703050020000000a382040761820403308203ffa003020105a1101b0e50432e494954422e4648472e4445a2273025a003020103a11e301c1b036e66731b15766e667372762e70632e696974622e6668672e6465a38203bb308203b7a003020117a103020102a28203a9048203a5c0be536630d9eddaa21dcaa5c68959ddcb5da7ea2ea2dbf2e641bd866fc5c5d1c0c21e261366d40775d8e8cec0c403f73255555ff92a20b373a778819bb1fa891e9ccfd6622a683e5a3c466e731f5b884ed79ff25f44bbcf20b493753de50b0cb7b2dc25266427bf658db58a51daba1339897c4d6709bb8cf220384faa7203bf9a92cfa2391d10eb3cf58a31aed3921b23cf1e49e0b91b615b51fea8d2c10307eec97ae4c8b5a0305105f1e5532f5f38dfc3ba6bd1f9667a5abd83fda0efaea4b532a5962c01a46fe8306072d3bc6d62062b10e73c0efe03316b3f9b8181b67d87fbc62cb371755b888e941cbd105d47e92b69439a87c36e438e5a0ee64f832efb8b46f0c4c65222721a06bb29456a744df5fe56f6b91c2d947e7f41b15ed46dc73d19c5a11974cc50d5faa53ffd27014aba6b69adf28d8e9efa4630c595477246f2d1be456b9fa446b55ef4182dd5f0e1bd0f418ce1bca9181759e68424ca9c55c9a17e62ab19d84be413615f650bb0ba10d1fc938b81cda4623f113dbf74cab8254f5e7e3e6a16a86efb140c1abedc65176795067bf52bb3d273b0833859878e9b51894689d24080929b57763a507e7435effb351f6ac4f2658fd92ab51c6b2b0b234ff413cbaf05e150ad19b220a8d63244bc6ebf601bc3ed3e5f5665c35b4bcb5696f8b5bcaff5ea829cd1cadb8ad1de0010e36fb8bf10305b396614d7bc483e2318391278c3c104a8d49c52b0c64091f4771b9aa05a710273c0a9fc074577b0fd7ad076fb14ba7760c8a95ffa49e52c33520420304d90d1dbe1c7d4960ef19fec74dd1af6ec7da006ac0dbf7aea78520c7ee0a378e4ac1cbe84b40b6d7e6aa43a2dcdc059acd71292590993b403f10f54c9f003996fe58b135dd2a3c1e23c6c39414a7a06de39e1625e209c1f23004a413577bd4aa9456107158a2e44490aad8a6db67a0672eb7ae51e1b01c0bf4ce63cc5e4fdfeced6eb753f76f0940366c4c577ac6794030d8850f12c2052ee20fdbea7b0bcc10392d8bb96a114088787dc1183bfa9bb86f0cb5872a0087b9234015ccd9bc33ea7f415e3d4669197cc036e9cbc4849eb08e4e91e1d7c7515851c3e473ac1e92076f2b4ad549a99ca0c4b50431bc5f6e6693f3c5697f96168f91221e991f08b0820d795d24b9caf587d70916c0e1b90fd51c6e6156310
39c1e748a841e349eb6d5272216bd2eec53ad09826e12a701886c46656627a1379f48e8b79592c2fd424b2b29414dec3d11093f618af48fa912105fd163c9b76df3a6f5a2608629b27e25b7476cc80cff8d8fdd466d5ff7fa481e93081e6a003020117a281de0481db5d54a6ca5f33bcd5f204274dc255945eb2ee22e4657eeae001d53df27a8bbdaf247e598050e5e9957588aad75de62ec64aaf586aed86cc173ce41ff9a590d5ac5588166cb2904470731606da0e29a16174ef44dab40383ba76f849994653073b94c1b50463cfb9e24068e07d93effa65ebd95ebb33a812a412fc6367d71dbcba30f7e95365fd787c00e23c98f4fb10a8bfc292426bad83305778d58e43758cd4b5005672c494812521d29c355a03e5e046927b5aff8ff12d7d9f07e70a4d3e8b7859fe9d8ae56595a3e89cb289f4f720ec28fbe179a160afcb2b4b 1373269499 0 0 \\x02000000 \\x60819506092a864886f71201020202006f8185308182a003020105a10302010fa2763074a003020117a26d046b60bea13089ec0fa390c0936703f608842069597bfeff00469419cdb6aab8b3e02ad15174806fe84012f2dc84cbd5304bbde04675c75eae472674609cf97a75a7a96306123387c70be8993a170c31c5f6ef6850d4905121272725e6ad523a10fd2f7ed0f5c76c4535f41421 \n", 2970) = 2970
close(13)                               = 0
open("/etc/krb5.conf", O_RDONLY)        = 13
read(13, "[libdefaults]\n  default_realm = PC.IITB.FHG.DE\n  default_tkt_enctypes = arcfour-hmac-md5\n  default_tgs_enctypes = arcfour-hmac-md5\n\t\n  forwardable = true\n\n[realms]\n  PC.IITB.FHG.DE = {\n    kdc = pcdc1.pc.iitb.fhg.de\n    kdc = pcdc2.pc.iitb.fhg.de\n  }\n\n[domain_realm]\n  pc.iitb.fhg.de = PC.IITB.FHG.DE\n  .pc.iitb.fhg.de = PC.IITB.FHG.DE\n\n\n", 4096) = 338
read(13, "", 4096)                      = 0
close(13)                               = 0
open("/dev/urandom", O_RDONLY)          = 13
read(13, "\347Y\373\4\247\233\30m\346\257\237\222\313\3318\35fJ\241\37I\350\374\366\276\333\2214\375\371l(\344\6\274\362\221\205U\213}\266\325\n;[\267\255X\360ogO6\24\247\30\242\333\322$\31;!", 64) = 64
close(13)                               = 0

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Kerberized NFS-Server Problem still present in 3.10.0-rc2
  2013-07-08  8:08                       ` Sven Geggus
@ 2013-07-08 13:35                         ` J. Bruce Fields
  2013-07-08 15:54                           ` Sven Geggus
  2013-07-09 16:57                         ` J. Bruce Fields
  1 sibling, 1 reply; 19+ messages in thread
From: J. Bruce Fields @ 2013-07-08 13:35 UTC (permalink / raw)
  To: Sven Geggus; +Cc: linux-nfs

On Mon, Jul 08, 2013 at 10:08:05AM +0200, Sven Geggus wrote:
> Monday Monday, so good to me :)
> 
> On Friday, 5. Jul 2013 22:34, J. Bruce Fields wrote:
> 
> > Hm, might be interesting to see what mountd's doing too:
> > 
> > 	strace -p $(pidof rpc.svcgssd) -s4096 -e trace=open,close,read,write

Argh--I said "mountd", then I forgot to replace the rpc.svcgssd in the
cut-n-pasted commandline; should have been:

	strace -p $(pidof rpc.mountd) -s4096 -e trace=open,close,read,write

> 
> OK, output attached. While reading this output I recognized that I
> overlooked the following error messages from rpc.svcgssd and rpc.mountd:
> 
> Jul  8 09:53:38 vnfsrv rpc.svcgssd[2914]: ERROR: GSS-API: error in gss_free_lucid_sec_context(): GSS_S_NO_CONTEXT (No context has been established) - Unknown error
> Jul  8 09:53:38 vnfsrv rpc.svcgssd[2914]: WARN: failed to free lucid sec context
> Jul  8 09:53:38 vnfsrv rpc.mountd[2927]: qword_eol: fprintf failed: errno 22 (Invalid argument)
> 
> > Hm, maybe this gid upcall?  Does the following (probably not completely
> > right) help?
> 
> Yes, we are getting there. While the strange messages from rpc.svcgssd
> persist, the mount now seems to work again after your patch has been
> applied.

Anyway.  OK, that more or less solves the mystery, though the strace
output might still be interesting.

--b.


* Re: Kerberized NFS-Server Problem still present in 3.10.0-rc2
  2013-07-08 13:35                         ` J. Bruce Fields
@ 2013-07-08 15:54                           ` Sven Geggus
  2013-07-08 19:41                             ` J. Bruce Fields
  0 siblings, 1 reply; 19+ messages in thread
From: Sven Geggus @ 2013-07-08 15:54 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: linux-nfs

J. Bruce Fields schrieb am Montag, den 08. Juli um 15:35 Uhr:

> Anyway.  OK, that more or less solves the mystery, though the strace
> output might still be interesting.

Not very enlightening, at least for me, but here we go:

read(3, "nfsd 10.1.7.30\n", 2048)       = 15
close(13)                               = 0
open("/var/lib/nfs/etab", O_RDONLY)     = 13
close(13)                               = 0
close(13)                               = 0
write(3, "nfsd 10.1.7.30 1373300439 * \n", 29) = 29
read(4, "4294967295\n", 2048)           = 11
close(14)                               = 0
close(13)                               = 0
read(13, "\2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\377\377\377\377\377\377\377\377\0\0\0\0\0\0\0\0\0\0\0\0", 36) = 36
close(13)                               = 0
write(4, "4294967295 1373300439 0 \n", 25) = -1 EINVAL (Invalid argument)

Regards

Sven

-- 
The source code is not comprehensible
                 (found in bug section of man 8 telnetd on Redhat Linux)

/me is giggls@ircnet, http://sven.gegg.us/ on the Web


* Re: Kerberized NFS-Server Problem still present in 3.10.0-rc2
  2013-07-08 15:54                           ` Sven Geggus
@ 2013-07-08 19:41                             ` J. Bruce Fields
  2013-07-09 16:03                               ` Sven Geggus
  0 siblings, 1 reply; 19+ messages in thread
From: J. Bruce Fields @ 2013-07-08 19:41 UTC (permalink / raw)
  To: Sven Geggus; +Cc: linux-nfs

On Mon, Jul 08, 2013 at 05:54:06PM +0200, Sven Geggus wrote:
> J. Bruce Fields schrieb am Montag, den 08. Juli um 15:35 Uhr:
> 
> > Anyway.  OK, that more or less solves the mystery, though the strace
> > output might still be interesting.
> 
> Not very enlightening, at least for me, but here we go:
> 
> read(3, "nfsd 10.1.7.30\n", 2048)       = 15
> close(13)                               = 0
> open("/var/lib/nfs/etab", O_RDONLY)     = 13
> close(13)                               = 0
> close(13)                               = 0
> write(3, "nfsd 10.1.7.30 1373300439 * \n", 29) = 29
> read(4, "4294967295\n", 2048)           = 11
> close(14)                               = 0
> close(13)                               = 0
> read(13, "\2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\377\377\377\377\377\377\377\377\0\0\0\0\0\0\0\0\0\0\0\0", 36) = 36
> close(13)                               = 0
> write(4, "4294967295 1373300439 0 \n", 25) = -1 EINVAL (Invalid argument)

Thanks!  Yep, 4294967295 is (u32)(-1), so it's this uid_valid check in
unix_gid_parse that we're hitting.

I'm not even sure whether it makes sense to be doing this upcall to ask
for supplemental groups in the -1 case.  Going with the minimal fix for
now.

Oh, and oops, it should go to stable too--adding that Cc: to my local
copy of the patch.

--b.

commit 8efb88340e29293e05f6b498b60596884c05a8a8
Author: J. Bruce Fields <bfields@redhat.com>
Date:   Mon Jul 8 13:44:45 2013 -0400

    svcrpc: fix failures to handle -1 uid's
    
    As of f025adf191924e3a75ce80e130afcd2485b53bb8 "sunrpc: Properly decode
    kuids and kgids in RPC_AUTH_UNIX credentials", any rpc containing a -1
    (0xffffffff) uid or gid would fail with a badcred error.
    
    Commit afe3c3fd5392b2f0066930abc5dbd3f4b14a0f13 "svcrpc: fix failures to
    handle -1 uid's and gid's" fixed part of the problem, but overlooked the
    gid upcall--the kernel can request supplementary gid's for the -1 uid,
    but mountd's attempt to write a response will get -EINVAL.
    
    Symptoms were nfsd failing to reply to the first attempt to use a newly
    negotiated krb5 context.
    
    Reported-by: Sven Geggus <lists@fuchsschwanzdomain.de>
    Tested-by: Sven Geggus <lists@fuchsschwanzdomain.de>
    Signed-off-by: J. Bruce Fields <bfields@redhat.com>

diff --git a/net/sunrpc/svcauth_unix.c b/net/sunrpc/svcauth_unix.c
index a98853d..621ca7b 100644
--- a/net/sunrpc/svcauth_unix.c
+++ b/net/sunrpc/svcauth_unix.c
@@ -493,8 +493,6 @@ static int unix_gid_parse(struct cache_detail *cd,
 	if (rv)
 		return -EINVAL;
 	uid = make_kuid(&init_user_ns, id);
-	if (!uid_valid(uid))
-		return -EINVAL;
 	ug.uid = uid;
 
 	expiry = get_expiry(&mesg);


* Re: Kerberized NFS-Server Problem still present in 3.10.0-rc2
  2013-07-08 19:41                             ` J. Bruce Fields
@ 2013-07-09 16:03                               ` Sven Geggus
  0 siblings, 0 replies; 19+ messages in thread
From: Sven Geggus @ 2013-07-09 16:03 UTC (permalink / raw)
  To: J. Bruce Fields

J. Bruce Fields schrieb am Montag, den 08. Juli um 21:41 Uhr:

> I'm not even sure whether it makes sense to be doing this upcall to ask
> for supplemental groups in the -1 case.  Going with the minimal fix for
> now.

Confirmed to work fine here, but strange messages from userland rpc.svcgssd persist:

Jul  9 17:59:27 vnfsrv rpc.svcgssd[2919]: ERROR: GSS-API: error in gss_free_lucid_sec_context(): GSS_S_NO_CONTEXT (No context has been established) - Unknown error
Jul  9 17:59:27 vnfsrv rpc.svcgssd[2919]: WARN: failed to free lucid sec context
Jul  9 17:59:27 vnfsrv rpc.svcgssd[2919]: ERROR: GSS-API: error in gss_free_lucid_sec_context(): GSS_S_NO_CONTEXT (No context has been established) - Unknown error
Jul  9 17:59:27 vnfsrv rpc.svcgssd[2919]: WARN: failed to free lucid sec context
Jul  9 17:59:27 vnfsrv rpc.svcgssd[2919]: ERROR: GSS-API: error in gss_free_lucid_sec_context(): GSS_S_NO_CONTEXT (No context has been established) - Unknown error
Jul  9 17:59:27 vnfsrv rpc.svcgssd[2919]: WARN: failed to free lucid sec context
Jul  9 17:59:27 vnfsrv rpc.svcgssd[2919]: ERROR: GSS-API: error in gss_free_lucid_sec_context(): GSS_S_NO_CONTEXT (No context has been established) - Unknown error
Jul  9 17:59:27 vnfsrv rpc.svcgssd[2919]: WARN: failed to free lucid sec context
Jul  9 17:59:27 vnfsrv rpc.svcgssd[2919]: ERROR: GSS-API: error in gss_free_lucid_sec_context(): GSS_S_NO_CONTEXT (No context has been established) - Unknown error
Jul  9 17:59:27 vnfsrv rpc.svcgssd[2919]: WARN: failed to free lucid sec context
Jul  9 17:59:27 vnfsrv rpc.svcgssd[2919]: ERROR: GSS-API: error in gss_free_lucid_sec_context(): GSS_S_NO_CONTEXT (No context has been established) - Unknown error
Jul  9 17:59:27 vnfsrv rpc.svcgssd[2919]: WARN: failed to free lucid sec context


> commit 8efb88340e29293e05f6b498b60596884c05a8a8

OK, is this supposed to get into 3.9.x and 3.10.x respectively?

Regards

Sven

Microsoft ist offenbar die einzige Firma, die in der Lage ist, ein mit
Office nicht kompatibles Bürosoftwarepaket einzuführen.
                            (Florian Weimer in de.alt.sysadmin.recovery)
/me is giggls@ircnet, http://sven.gegg.us/ on the Web


* Re: Kerberized NFS-Server Problem still present in 3.10.0-rc2
  2013-07-08  8:08                       ` Sven Geggus
  2013-07-08 13:35                         ` J. Bruce Fields
@ 2013-07-09 16:57                         ` J. Bruce Fields
  2013-07-09 17:23                           ` Chuck Lever
  2013-07-10  7:55                           ` Sven Geggus
  1 sibling, 2 replies; 19+ messages in thread
From: J. Bruce Fields @ 2013-07-09 16:57 UTC (permalink / raw)
  To: Sven Geggus; +Cc: linux-nfs, Chuck Lever, NeilBrown

On Mon, Jul 08, 2013 at 10:08:05AM +0200, Sven Geggus wrote:
> OK, output attached. While reading this output I recognized that I
> overlooked the following error messages from rpc.svcgssd and rpc.mountd:

I don't think they're causing you any practical problem, but there was
some discussion of how to destroy contexts in gssd recently that I
didn't follow closely enough.  I seem to recall Neil and Chuck being the
ones chasing down those bugs?

> Jul  8 09:53:38 vnfsrv rpc.svcgssd[2914]: ERROR: GSS-API: error in gss_free_lucid_sec_context(): GSS_S_NO_CONTEXT (No context has been established) - Unknown error
> Jul  8 09:53:38 vnfsrv rpc.svcgssd[2914]: WARN: failed to free lucid sec context

Does rpc.svcgssd still need fixing?  (Sven, could you remind us what
nfs-utils version this is from?)

--b.


* Re: Kerberized NFS-Server Problem still present in 3.10.0-rc2
  2013-07-09 16:57                         ` J. Bruce Fields
@ 2013-07-09 17:23                           ` Chuck Lever
  2013-07-10  7:55                           ` Sven Geggus
  1 sibling, 0 replies; 19+ messages in thread
From: Chuck Lever @ 2013-07-09 17:23 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: Sven Geggus, linux-nfs, NeilBrown


On Jul 9, 2013, at 12:57 PM, "J. Bruce Fields" <bfields@fieldses.org> wrote:

> On Mon, Jul 08, 2013 at 10:08:05AM +0200, Sven Geggus wrote:
>> OK, output attached. While reading this output I recognized that I
>> overlooked the following error messages from rpc.svcgssd and rpc.mountd:
> 
> I don't think they're causing you any practical problem, but there was
> some discussion of how to destroy contexts in gssd recently that I
> didn't follow closely enough.  I seem to recall Neil and Chuck being the
> ones chasing down those bugs?

I haven't touched rpc.svcgssd.  I thought this code was being replaced by the gss_proxy-based implementation.

>> Jul  8 09:53:38 vnfsrv rpc.svcgssd[2914]: ERROR: GSS-API: error in gss_free_lucid_sec_context(): GSS_S_NO_CONTEXT (No context has been established) - Unknown error
>> Jul  8 09:53:38 vnfsrv rpc.svcgssd[2914]: WARN: failed to free lucid sec context
> 
> Does rpc.svcgssd still need fixing?  (Sven, could you remind us what
> nfs-utils version this is from?)
> 
> --b.

-- 
Chuck Lever
chuck[dot]lever[at]oracle[dot]com






* Re: Kerberized NFS-Server Problem still present in 3.10.0-rc2
  2013-07-09 16:57                         ` J. Bruce Fields
  2013-07-09 17:23                           ` Chuck Lever
@ 2013-07-10  7:55                           ` Sven Geggus
  1 sibling, 0 replies; 19+ messages in thread
From: Sven Geggus @ 2013-07-10  7:55 UTC (permalink / raw)
  To: linux-nfs

J. Bruce Fields <bfields@fieldses.org> wrote:

>> Jul  8 09:53:38 vnfsrv rpc.svcgssd[2914]: ERROR: GSS-API: error in gss_free_lucid_sec_context(): GSS_S_NO_CONTEXT (No context has been established) - Unknown error
>> Jul  8 09:53:38 vnfsrv rpc.svcgssd[2914]: WARN: failed to free lucid sec context
> 
> Does rpc.svcgssd still need fixing?  (Sven, could you remind us what
> nfs-utils version this is from?)

1.2.8

Regards

Sven

-- 
"Thinking of using NT for your critical apps?
                                  Isn't there enough suffering in the world?"
                   (Advertisement of Sun Microsystems in Wall Street Journal)
/me is giggls@ircnet, http://sven.gegg.us/ on the Web


end of thread, other threads:[~2013-07-10  7:55 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-06-19 13:47 Kerberized NFS-Server Problem still present in 3.10.0-rc2 Sven Geggus
2013-06-19 21:34 ` J. Bruce Fields
2013-06-20  8:03   ` Sven Geggus
2013-06-20 12:09     ` Sven Geggus
2013-06-20 14:49     ` J. Bruce Fields
2013-06-20 14:52       ` J. Bruce Fields
2013-06-21  8:32         ` Sven Geggus
2013-06-24 21:54           ` J. Bruce Fields
2013-06-25  9:46             ` Sven Geggus
2013-07-01 12:09               ` Sven Geggus
     [not found]               ` <20130701205234.GF19945@fieldses.org>
2013-07-02  8:05                 ` Sven Geggus
     [not found]                   ` <20130705181859.GA8288@fieldses.org>
     [not found]                     ` <20130705203435.GE8288@fieldses.org>
2013-07-08  8:08                       ` Sven Geggus
2013-07-08 13:35                         ` J. Bruce Fields
2013-07-08 15:54                           ` Sven Geggus
2013-07-08 19:41                             ` J. Bruce Fields
2013-07-09 16:03                               ` Sven Geggus
2013-07-09 16:57                         ` J. Bruce Fields
2013-07-09 17:23                           ` Chuck Lever
2013-07-10  7:55                           ` Sven Geggus
