From: Doug Nazar <nazard@nazar.ca>
To: "Kraus, Sebastian" <sebastian.kraus@tu-berlin.de>,
	"linux-nfs@vger.kernel.org" <linux-nfs@vger.kernel.org>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Subject: Re: Strange segmentation violations of rpc.gssd in Debian Buster
Date: Fri, 26 Jun 2020 13:23:54 -0400
Message-ID: <c1c314fd-1855-cf04-3ec5-5f6eb35719a5@nazar.ca>
In-Reply-To: <e9de5046e7734e728e64b386314a5d2e@tu-berlin.de>

On 2020-06-26 08:31, Kraus, Sebastian wrote:
> Hi Doug, Dear Bruce,
> FYI, I filed a bug with nfs-common package in Debian Buster:
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=963746
>
> Hope this helps to track down the problem further and shed some light on the mysterious segfaults of rpc.gssd.

Ok, I think I see what's going on. The struct clnt_info is getting freed 
out from under the upcall thread. In this case the memory was immediately 
reused for another client, which zeroed the struct and was still in the 
process of looking up the info for its client, hence the protocol and 
server fields were NULL in the upcall thread.
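
To make the race concrete, here is a stripped-down sketch of the two paths 
as I read them. The struct is trimmed and the function names are invented 
for illustration; this is not the actual nfs-utils code.

    /* Simplified sketch of the race; not the real gssd code. */
    #include <stdlib.h>

    struct clnt_info {              /* trimmed stand-in for gssd's struct */
            char *servername;
            char *protocol;
            /* ... */
    };

    /* Main thread: runs when inotify reports the client dir went away. */
    void client_dir_removed(struct clnt_info *clp)
    {
            free(clp->servername);
            free(clp->protocol);
            free(clp);              /* an upcall thread may still hold clp */
    }

    /* Upcall thread: still using the same pointer. */
    void do_upcall(struct clnt_info *clp)
    {
            /* If the main thread freed clp, or the allocator handed the
             * block to a new client that zeroed it, these reads see NULL
             * or garbage, matching the "(null)" in the log below and the
             * crash in create_auth_rpc_client(). */
            const char *server = clp->servername;
            const char *proto  = clp->protocol;
            (void)server;
            (void)proto;
    }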

That explains why I haven't been able to recreate it. Thanks for the stack 
trace, Sebastian.

Bruce, I can't see any locking or reference counting around struct 
clnt_info; it just gets destroyed when the inotify event arrives. Or should 
it be deep copied when starting an upcall? Am I missing something?
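
If refcounting is the way to go, something along these lines is what I have 
in mind. clnt_get()/clnt_put() and the refcount/lock fields are made-up 
names; this is only a sketch, not a patch:

    #include <pthread.h>
    #include <stdlib.h>

    struct clnt_info {
            pthread_mutex_t lock;   /* pthread_mutex_init() when created */
            int refcount;           /* 1 while on the client list */
            char *servername;
            char *protocol;
            /* ... */
    };

    void clnt_get(struct clnt_info *clp)
    {
            pthread_mutex_lock(&clp->lock);
            clp->refcount++;
            pthread_mutex_unlock(&clp->lock);
    }

    void clnt_put(struct clnt_info *clp)
    {
            int last;

            pthread_mutex_lock(&clp->lock);
            last = (--clp->refcount == 0);
            pthread_mutex_unlock(&clp->lock);

            if (last) {             /* whoever drops the last ref frees it */
                    pthread_mutex_destroy(&clp->lock);
                    free(clp->servername);
                    free(clp->protocol);
                    free(clp);
            }
    }

The inotify removal path would then call clnt_put() instead of freeing 
directly, and each upcall would take a reference before its thread starts 
and drop it when the upcall finishes, so the struct can't be reused 
underneath it. Deep copying at upcall start would avoid the locking, at the 
cost of duplicating the strings for every upcall.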

Doug

Jun 25 11:46:08 server rpc.gssd[6356]: inotify event for topdir (nfsd4_cb) - ev->wd (5) ev->name (clnt50e) ev->mask (0x40000100)
Jun 25 11:46:08 server rpc.gssd[6356]: handle_gssd_upcall: 'mech=krb5 uid=0 target=host@client.domain.tu-berlin.de service=nfs enctypes=18,17,16,23,3,1,2 ' (nfsd4_cb/clnt50e)
Jun 25 11:46:08 server rpc.gssd[6356]: krb5_use_machine_creds: uid 0 tgtname host@client.domain.tu-berlin.de
Jun 25 11:46:08 server rpc.gssd[6356]: inotify event for clntdir (nfsd4_cb/clnt50e) - ev->wd (75) ev->name (krb5) ev->mask (0x00000200)
Jun 25 11:46:08 server rpc.gssd[6356]: inotify event for clntdir (nfsd4_cb/clnt50e) - ev->wd (75) ev->name (gssd) ev->mask (0x00000200)
Jun 25 11:46:08 server rpc.gssd[6356]: inotify event for clntdir (nfsd4_cb/clnt50e) - ev->wd (75) ev->name (info) ev->mask (0x00000200)
Jun 25 11:46:08 server rpc.gssd[6356]: inotify event for clntdir (nfsd4_cb/clnt50e) - ev->wd (75) ev->name (<?>) ev->mask (0x00008000)
Jun 25 11:46:08 server rpc.gssd[6356]: inotify event for topdir (nfsd4_cb) - ev->wd (5) ev->name (clnt50f) ev->mask (0x40000100)
Jun 25 11:46:08 server rpc.gssd[6356]: Full hostname for '' is 'client.domain.tu-berlin.de'
Jun 25 11:46:08 server rpc.gssd[6356]: Full hostname for 'server.domain.tu-berlin.de' is 'server.domain.tu-berlin.de'
Jun 25 11:46:08 server rpc.gssd[6356]: Success getting keytab entry for 'nfs/server.domain.tu-berlin.de@TU-BERLIN.DE'
Jun 25 11:46:08 server rpc.gssd[6356]: INFO: Credentials in CC 'FILE:/tmp/krb5ccmachine_TU-BERLIN.DE' are good until 1593101766
Jun 25 11:46:08 server rpc.gssd[6356]: INFO: Credentials in CC 'FILE:/tmp/krb5ccmachine_TU-BERLIN.DE' are good until 1593101766
Jun 25 11:46:08 server rpc.gssd[6356]: creating (null) client for server (null)
Jun 25 11:46:08 all kernel: rpc.gssd[14174]: segfault at 0 ip 000056233fff038e sp 00007fb2eaeb9880 error 4 in rpc.gssd[56233ffed000+9000]


Thread 1 (Thread 0x7fb2eaeba700 (LWP 14174)):
#0  0x000056233fff038e in create_auth_rpc_client (clp=clp@entry=0x562341008fa0, tgtname=tgtname@entry=0x562341011c8f "host@client.domain.tu-berlin.de", clnt_return=clnt_return@entry=0x7fb2eaeb9de8, auth_return=auth_return@entry=0x7fb2eaeb9d50, uid=uid@entry=0, cred=cred@entry=0x0, authtype=0) at gssd_proc.c:352

Thread 2 (Thread 0x7fb2eb6d9740 (LWP 6356)):
#12 0x000056233ffef82c in gssd_read_service_info (clp=0x562341008fa0, dirfd=11) at gssd.c:326


