* autofs reverts to IPv4 for multi-homed IPv6 server ?
@ 2016-04-07 14:19 Christof Koehler
  2016-04-08  4:46 ` Ian Kent
  0 siblings, 1 reply; 49+ messages in thread
From: Christof Koehler @ 2016-04-07 14:19 UTC (permalink / raw)
  To: autofs

Hello everybody,

I am on Ubuntu 14.04 with autofs 5.0.7 and I observe behaviour that is (at
least to me) unexpected, as detailed below. Under certain circumstances NFS4
mounts made through autofs fall back to using IPv4 addresses although valid
IPv6 addresses are available, while a plain mount works as expected.

Setup:
------
Both the NFS server and the client are configured with an IPv4 address, an
IPv6 GUA and an IPv6 ULA. For brevity I will shorten the IPv4 address to
192, the GUA to 2001 and the ULA to fd5f below. In the following I only
change the DNS AAAA records; the network configuration on server/client and
the A records never change. Server and client always have working IPv4,
IPv6 GUA and IPv6 ULA connectivity.

Test with mount:
----------------
A plain "mount -t nfs4 server:/locals /mnt/disk1/" on the client gives the
expected source/target address selection, depending on the DNS entries for
the server:

Server DNS entry|	client address used to mount
2001		|	2001
fd5f		|	fd5f
2001+fd5f	|	fd5f

So in all cases the address selection follows RFC 6724/3484. Please note
that the server has two AAAA records (multi-homed) in the last test.
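(For reference, the candidate addresses in the order the resolver hands them
to mount after RFC 6724 sorting can be listed on the client with e.g.

  getent ahosts server

where "server" stands for the server's host name; the first address shown is
the one a plain mount should end up using.)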

Test with autofs:
-----------------
A map lookup yields "-fstype=nfs4,rw,intr,nosuid,soft,nodev server:/locals"
for the mount. Now I again change the server's AAAA records, with the
following result:

Server DNS entry|	client address used to mount
2001		|	2001
fd5f		|	fd5f
2001+fd5f	|	192

For a multi-homed NFS4 server autofs apparently falls back to IPv4 although
valid IPv6 options exist. As shown above, just mounting without autofs would
stick to RFC 6724/3484 instead. I believe that autofs should also select the
fd5f ULA in the multi-homed case.

Is this a known behaviour? Do any workarounds exist? I could not find
anything.

I tried to compile autofs 5.1.1 with --with-libtirpc because of
https://bugs.launchpad.net/ubuntu/+source/autofs/+bug/1101779 but could not
get the binary to work. I filed a bug report for the behaviour described
above at https://bugs.launchpad.net/ubuntu/+source/autofs/+bug/1564380 but
suspect that it is better suited for this list.

Best Regards

Christof

-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-07 14:19 autofs reverts to IPv4 for multi-homed IPv6 server ? Christof Koehler
@ 2016-04-08  4:46 ` Ian Kent
  2016-04-08 10:10   ` Ian Kent
  2016-04-08 11:47   ` Christof Koehler
  0 siblings, 2 replies; 49+ messages in thread
From: Ian Kent @ 2016-04-08  4:46 UTC (permalink / raw)
  To: christof.koehler, autofs

On Thu, 2016-04-07 at 16:19 +0200, Christof Koehler wrote:
> Hello everybody,
> 
> I am on ubuntu 14.04 with autofs 5.0.7 and I observe an (for me)
> unexpected behaviour as detailed below. Apparently using autofs NFS4
> mounts fall back to using IPv4 addresses although valid IPv6 addresses
> are available under certain circumstances, while a plain mount works
> as
> expected.

Can you provide a full debug log?

It might be autofs interfering with the mount but mount.nfs(8) is a more
likely candidate.
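(Easiest is probably to stop the service and run the daemon in the foreground
with debugging turned on, something like

  automount -f -d

or, alternatively, set the autofs log level to debug and collect the syslog
output.)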

> 
> Setup:
> ------
> Both, NFS server and client, are configured with an IPv4 address and
> an
> IPv6 GUA and IPv6 ULA. For brevity I will shorten the IPv4 address to
> 192, the GUA to 2001 and the ULA to fd5f  below. I will only change
> the
> DNS AAAA record in the following, the network configuration on
> server/client or the A records never change. Server and client have
> always working IPv4 and IPv6 GUA and ULA.
> 
> Test with mount:
> ----------------
> Using a plain "mount  -t nfs4 server:/locals /mnt/disk1/" on the
> client
> gives depending on the DNS entries for the server the expected
> source/target selection:
> 
> Server DNS entry|	client address used to mount
> 2001		|	2001
> fd5f		|	fd5f
> 2001+fd5f	|	fd5f
> 
> So in all cases RFC 6724/3484 is observed selecting the addresses.
> Please note that the server has two AAAA records (multi-homed) in the
> last test.
> 
> Test with autofs:
> -----------------
> A map lookup will yield "-fstype=nfs4,rw,intr,nosuid,soft,nodev
> server:/locals"
> for the mount. Now I change again the servers AAAA records with the
> following result:
> 
> Server DNS entry|	client address used to mount
> 2001		|	2001
> fd5f		|	fd5f
> 2001+fd5f	|	192
> 
> For a multi-homed NFS4 server autofs apparently falls back to IPv4
> although valid IPv6 options exist. As shown above just mounting
> without
> autofs would stick to RFC 6724/3484 instead. I believe that autofs
> should 
> also select fd5f ULAs in the multi-homed case.
> 
> Is this a known behaviour ? Do any workarounds exist ? I could not
> find
> anything.
> 
> I tried to compile autofs 5.1.1 with --with-libtirpc because
> of https://bugs.launchpad.net/ubuntu/+source/autofs/+bug/1101779 but
> could not get the binary to work. I filed a bug report for the
> behaviour
> described above 
> https://bugs.launchpad.net/ubuntu/+source/autofs/+bug/1564380
> but suspect that this is better suited for this list.
> 
> Best Regards
> 
> Christof
> 

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-08  4:46 ` Ian Kent
@ 2016-04-08 10:10   ` Ian Kent
  2016-04-08 10:14     ` Ian Kent
                       ` (2 more replies)
  2016-04-08 11:47   ` Christof Koehler
  1 sibling, 3 replies; 49+ messages in thread
From: Ian Kent @ 2016-04-08 10:10 UTC (permalink / raw)
  To: christof.koehler, autofs

On Fri, 2016-04-08 at 12:46 +0800, Ian Kent wrote:
> On Thu, 2016-04-07 at 16:19 +0200, Christof Koehler wrote:
> > Hello everybody,
> > 
> > I am on ubuntu 14.04 with autofs 5.0.7 and I observe an (for me)
> > unexpected behaviour as detailed below. Apparently using autofs NFS4
> > mounts fall back to using IPv4 addresses although valid IPv6
> > addresses
> > are available under certain circumstances, while a plain mount works
> > as
> > expected.
> 
> Can you provide a full debug log.
> 
> It might be autofs interfering with the mount but mount.nfs(8) is a
> more
> likely candidate.
> 
> > 
> > Setup:
> > ------
> > Both, NFS server and client, are configured with an IPv4 address and
> > an
> > IPv6 GUA and IPv6 ULA. For brevity I will shorten the IPv4 address
> > to
> > 192, the GUA to 2001 and the ULA to fd5f  below. I will only change
> > the
> > DNS AAAA record in the following, the network configuration on
> > server/client or the A records never change. Server and client have
> > always working IPv4 and IPv6 GUA and ULA.
> > 
> > Test with mount:
> > ----------------
> > Using a plain "mount  -t nfs4 server:/locals /mnt/disk1/" on the
> > client
> > gives depending on the DNS entries for the server the expected
> > source/target selection:
> > 
> > Server DNS entry|	client address used to mount
> > 2001		|	2001
> > fd5f		|	fd5f
> > 2001+fd5f	|	fd5f
> > 
> > So in all cases RFC 6724/3484 is observed selecting the addresses.
> > Please note that the server has two AAAA records (multi-homed) in
> > the
> > last test.
> > 
> > Test with autofs:
> > -----------------
> > A map lookup will yield "-fstype=nfs4,rw,intr,nosuid,soft,nodev
> > server:/locals"
> > for the mount. Now I change again the servers AAAA records with the
> > following result:
> > 
> > Server DNS entry|	client address used to mount
> > 2001		|	2001
> > fd5f		|	fd5f
> > 2001+fd5f	|	192
> > 
> > For a multi-homed NFS4 server autofs apparently falls back to IPv4
> > although valid IPv6 options exist. As shown above just mounting
> > without
> > autofs would stick to RFC 6724/3484 instead. I believe that autofs
> > should 
> > also select fd5f ULAs in the multi-homed case.
> > 
> > Is this a known behaviour ? Do any workarounds exist ? I could not
> > find
> > anything.
> > 
> > I tried to compile autofs 5.1.1 with --with-libtirpc because
> > of https://bugs.launchpad.net/ubuntu/+source/autofs/+bug/1101779 but
> > could not get the binary to work. I filed a bug report for the
> > behaviour
> > described above 
> > https://bugs.launchpad.net/ubuntu/+source/autofs/+bug/1564380
> > but suspect that this is better suited for this list.

I've been thinking about this and I have a couple of thoughts.

As far as IPv6 goes, using glibc RPC is, I think, not going to work!

That's the first thing that needs to be sorted out.

I've been using libtirpc in Fedora and RHEL builds for nearly 10 years
so I don't think the library problem is with autofs.

This is an indication someone is doing something a little dumb:

automount[20444]: open_mount:247: parse(sun): cannot open mount module
nfs (/usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so: undefined symbol:
clnt_dg_create)
automount[20444]: lookup(file): failed to open parse context

clnt_dg_create() is an entry point in libtirpc and I'm fairly sure it has
been present in libtirpc from the beginning, but maybe not and a very old
libtirpc is in use; I don't know.

Anyway, it looks more like either the matching shared library from the
build is not present on the system or the package was built with a
libtirpc devel package but the runtime library isn't present (and
obviously is not a dependency of the package).
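A quick way to tell which of those it is (paths taken from your log, adjust
as needed):

  # is the module linked against libtirpc at all?
  ldd /usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so | grep tirpc

  # does the installed runtime library export the missing symbol?
  nm -D /lib/x86_64-linux-gnu/libtirpc.so.1 | grep clnt_dg_create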

I'm not familiar with the Debian build system so I'm probably not going
to be much help on that.

Ian

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-08 10:10   ` Ian Kent
@ 2016-04-08 10:14     ` Ian Kent
  2016-04-08 12:25     ` Christof Koehler
  2016-04-11  2:42     ` Ian Kent
  2 siblings, 0 replies; 49+ messages in thread
From: Ian Kent @ 2016-04-08 10:14 UTC (permalink / raw)
  To: christof.koehler, autofs

On Fri, 2016-04-08 at 18:10 +0800, Ian Kent wrote:
> I've been using libtirpc in Fedora and RHEL builds for nearly 10 years
> so I don't think the library problem is with autofs.

Umm .. that might not be true.

Amend that to read: I've been using libtirpc with autofs since not long
after it was available; it feels like it could be 10 years, though. ;)

Ian

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-08  4:46 ` Ian Kent
  2016-04-08 10:10   ` Ian Kent
@ 2016-04-08 11:47   ` Christof Koehler
  1 sibling, 0 replies; 49+ messages in thread
From: Christof Koehler @ 2016-04-08 11:47 UTC (permalink / raw)
  To: autofs

[-- Attachment #1: Type: text/plain, Size: 6912 bytes --]

Hello,

I will first provide logs (syslog, passing "debug" to automount) and
additional information with this email, and comment on my attempts to build
a new autofs later.

The file ipv4.txt.gz contains the log for a situation where
server and client have GUA and ULA and autofs falls back to IPv4. The
file 2001.txt.gz contains the log for a mount via 2001 GUA in a
situation where server and client have only GUA entries in DNS. The
mount in question is /local/core330.

At the bottom is some more information probably necessary to understand the
attached logs.

If you need some other debug output please let me know how to get it.

As far as I can see the main difference is that in 2001.txt.gz we have
Apr  8 13:31:49 core324 automount[18359]: mount_mount: mount(nfs): calling mount -t nfs4 -s -o rw,intr,nosuid,soft,nodev core330:/locals /local/core330

and in ipv4.txt.gz
Apr  8 13:27:05 core324 automount[18359]: mount_mount: mount(nfs): calling mount -t nfs4 -s -o rw,intr,nosuid,soft,nodev 192.168.220.118:/locals /local/core330


Also, I did some more tests using just mount, to demonstrate what the
behaviour of mount is. Please observe the result in the last situation
(number 5): there is some round-robin (?) behaviour involved that I did not
expect. I remember that for a multi-homed IPv4 host the DNS would randomly
resolve to one address or the other. Might the same apply to IPv6?
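(Whether the name server rotates the AAAA answers can be checked by repeating
something like

  dig +short AAAA core330.bccms.uni-bremen.de

a few times and comparing the order of the returned addresses.)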

To stress the point: I am always only changing DNS AAAA records, not the
machines' actual network configuration.

1. Server with 2001 and fd5f, client with 2001:
  mount -v  -t nfs4 core330:/locals /mnt/disk1/
  mount.nfs4: trying text-based options 'addr=fd5f:852:a27c:1261:2000::118,clientaddr=fd5f:852:a27c:1261:2000::104'

2. Server with 2001, client with 2001:
  mount.nfs4: trying text-based options 'addr=2001:638:708:1261:2000::118,clientaddr=2001:638:708:1261:2000::104'

3. Server with 2001, client with 2001 and fd5f:
  mount.nfs4: trying text-based options 'addr=2001:638:708:1261:2000::118,clientaddr=2001:638:708:1261:2000::104'

4. Server with fd5f, client with fd5f:
  mount.nfs4: trying text-based options 'addr=fd5f:852:a27c:1261:2000::118,clientaddr=fd5f:852:a27c:1261:2000::104'

5. Server with 2001 and fd5f, client with 2001 and fd5f: Mounting repeatedly I see both !
  mount.nfs4: trying text-based options 'addr=fd5f:852:a27c:1261:2000::118,clientaddr=fd5f:852:a27c:1261:2000::104'
  mount.nfs4: trying text-based options 'addr=2001:638:708:1261:2000::118,clientaddr=2001:638:708:1261:2000::104'


Best Regards

Christof

The logs contain output for the additional mountpoints /sge, /home and
possibly /usr/local; these are not relevant, but in a production environment
I cannot remove them.

The server in question is core330.bccms.uni-bremen.de, the client is
core324.bccms.uni-bremen.de. 

A dig of the server from the client gives either
core330.bccms.uni-bremen.de. 604800 IN  AAAA 2001:638:708:1261:2000::118
core330.bccms.uni-bremen.de. 604800 IN  AAAA fd5f:852:a27c:1261:2000::118
core330.bccms.uni-bremen.de. 604800 IN  A       192.168.220.118
or without the fd5f ULA.

For the client, dig gives either
core324.bccms.uni-bremen.de. 604800 IN  AAAA fd5f:852:a27c:1261:2000::104
core324.bccms.uni-bremen.de. 604800 IN  AAAA 2001:638:708:1261:2000::104
core324.bccms.uni-bremen.de. 604800 IN  A       192.168.220.104
or without the fd5f ULA.

I should probably point out that we have custom address labels in place on
server and client to prevent usage of privacy addresses:

root@core324:~# ip -6 addrlabel
prefix ::1/128 label 0 
prefix 2001:638:708:1261:2000::/96 label 99 
prefix 2001:638:708:1261:1000::/96 label 99 
prefix ::/96 label 3 
prefix ::ffff:0.0.0.0/96 label 4 
prefix 2001::/32 label 6 
prefix 2001:10::/28 label 7 
prefix 3ffe::/16 label 12 
prefix 2002::/16 label 2 
prefix fec0::/10 label 11 
prefix fc00::/7 label 5 
prefix ::/0 label 1
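(For completeness, the two site-specific /96 entries above correspond to
commands of the form

  ip addrlabel add prefix 2001:638:708:1261:2000::/96 label 99
  ip addrlabel add prefix 2001:638:708:1261:1000::/96 label 99

the remaining entries are the kernel's default policy table.)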



On Fri, Apr 08, 2016 at 12:46:00PM +0800, Ian Kent wrote:
> On Thu, 2016-04-07 at 16:19 +0200, Christof Koehler wrote:
> > Hello everybody,
> > 
> > I am on ubuntu 14.04 with autofs 5.0.7 and I observe an (for me)
> > unexpected behaviour as detailed below. Apparently using autofs NFS4
> > mounts fall back to using IPv4 addresses although valid IPv6 addresses
> > are available under certain circumstances, while a plain mount works
> > as
> > expected.
> 
> Can you provide a full debug log.
> 
> It might be autofs interfering with the mount but mount.nfs(8) is a more
> likely candidate.
> 
> > 
> > Setup:
> > ------
> > Both, NFS server and client, are configured with an IPv4 address and
> > an
> > IPv6 GUA and IPv6 ULA. For brevity I will shorten the IPv4 address to
> > 192, the GUA to 2001 and the ULA to fd5f  below. I will only change
> > the
> > DNS AAAA record in the following, the network configuration on
> > server/client or the A records never change. Server and client have
> > always working IPv4 and IPv6 GUA and ULA.
> > 
> > Test with mount:
> > ----------------
> > Using a plain "mount  -t nfs4 server:/locals /mnt/disk1/" on the
> > client
> > gives depending on the DNS entries for the server the expected
> > source/target selection:
> > 
> > Server DNS entry|	client address used to mount
> > 2001		|	2001
> > fd5f		|	fd5f
> > 2001+fd5f	|	fd5f
> > 
> > So in all cases RFC 6724/3484 is observed selecting the addresses.
> > Please note that the server has two AAAA records (multi-homed) in the
> > last test.
> > 
> > Test with autofs:
> > -----------------
> > A map lookup will yield "-fstype=nfs4,rw,intr,nosuid,soft,nodev
> > server:/locals"
> > for the mount. Now I change again the servers AAAA records with the
> > following result:
> > 
> > Server DNS entry|	client address used to mount
> > 2001		|	2001
> > fd5f		|	fd5f
> > 2001+fd5f	|	192
> > 
> > For a multi-homed NFS4 server autofs apparently falls back to IPv4
> > although valid IPv6 options exist. As shown above just mounting
> > without
> > autofs would stick to RFC 6724/3484 instead. I believe that autofs
> > should 
> > also select fd5f ULAs in the multi-homed case.
> > 
> > Is this a known behaviour ? Do any workarounds exist ? I could not
> > find
> > anything.
> > 
> > I tried to compile autofs 5.1.1 with --with-libtirpc because
> > of https://bugs.launchpad.net/ubuntu/+source/autofs/+bug/1101779 but
> > could not get the binary to work. I filed a bug report for the
> > behaviour
> > described above 
> > https://bugs.launchpad.net/ubuntu/+source/autofs/+bug/1564380
> > but suspect that this is better suited for this list.
> > 
> > Best Regards
> > 
> > Christof
> > 
> --
> To unsubscribe from this list: send the line "unsubscribe autofs" in

-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/

[-- Attachment #2: ipv4.txt.gz --]
[-- Type: application/octet-stream, Size: 1563 bytes --]

[-- Attachment #3: 2001.txt.gz --]
[-- Type: application/octet-stream, Size: 3207 bytes --]


* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-08 10:10   ` Ian Kent
  2016-04-08 10:14     ` Ian Kent
@ 2016-04-08 12:25     ` Christof Koehler
  2016-04-08 14:29       ` Christof Koehler
  2016-04-09  1:35       ` Ian Kent
  2016-04-11  2:42     ` Ian Kent
  2 siblings, 2 replies; 49+ messages in thread
From: Christof Koehler @ 2016-04-08 12:25 UTC (permalink / raw)
  To: autofs

Hello again,
> I've been thinking about this and I have a couple of thoughts.
> 
> As far a IPv6 goes using glibc RPC is, I think, not going to work!
> 
> That's the first thing that needs to be sorted out.
> 
> I've been using libtirpc in Fedora and RHEL builds for nearly 10 years
> so I don't think the library problem is with autofs.
> 
> This is an indication someone is doing something a little dumb:
> 
> automount[20444]: open_mount:247: parse(sun): cannot open mount module
> nfs (/usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so: undefined symbol:
> clnt_dg_create)

Concerning my failures to build autofs: first, the client has all the
libtirpc packages I think are necessary:
# dpkg -l libtirpc\*|grep ii
ii  libtirpc-dev                                   0.2.2-5ubuntu2
ii  libtirpc1:amd64                                0.2.2-5ubuntu2

We have libtirpc1 on the machines by default, and I had to install
libtirpc-dev so that ./configure with --with-libtirpc would actually do
anything.

I actually tried two things: compiling autofs 5.1.1 from source, and
rebuilding a new 5.0.7 package from Ubuntu's source deb.

Using the sources at https://www.kernel.org/pub/linux/daemons/autofs/v5/
I was basically confused about what to do with the patches. Do I have to
apply everything in patches-5.1.2 to autofs-5.1.1.tar.gz to get 5.1.2?
How do I do that automatically? I noticed that autofs-5.1.1.tar.gz is
missing the patch mentioned in message 15 of
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=737679
which is, however, contained in autofs-5.1.1-revert-fix-libtirpc-name-clash.patch.

So, to make it short, I certainly messed something up somewhere; the final
binary and libraries did not work. Additionally, the installation did not
play nice: although --prefix= was set it overwrote configuration files in
/etc. But I think I cleaned everything up afterwards.

If someone can provide some hints I would try it again.

After that I rebuilt the 5.0.7 package from the source deb after adding
--with-libtirpc to debian/rules as suggested in the bug reports, and
installed from that package. After installing I checked with ldd that
/usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so had been built with a
reference to libtirpc.

This attempt gave the error message in the Ubuntu bug report
https://bugs.launchpad.net/ubuntu/+source/autofs/+bug/1564380

So, any hints are appreciated. As long as I can stick to 5.0.7 rebuilt from
the source deb, installing/re-installing is no problem and I can try
different things you might want, assuming I can get the program to work. :-)

Thank you very much for all your help!


Best Regards

Christof



-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-08 12:25     ` Christof Koehler
@ 2016-04-08 14:29       ` Christof Koehler
  2016-04-08 15:32         ` Christof Koehler
                           ` (2 more replies)
  2016-04-09  1:35       ` Ian Kent
  1 sibling, 3 replies; 49+ messages in thread
From: Christof Koehler @ 2016-04-08 14:29 UTC (permalink / raw)
  To: autofs

[-- Attachment #1: Type: text/plain, Size: 4803 bytes --]

Hello,

apparently I confused my 5.1.1 source built experiment and my debian
package rebuild experiment when I reported that libtirpc was used in my
last email. So here is a new try to rebuild the deb source with
--with-libtirpc.

I did an apt-get source autofs and added --with-libtirpc to debian/rules.
After that it would of course not let me build a package ("aborting due to
unexpected upstream changes"), so I just did a "dpkg-buildpackage -b" and
then dpkg -i autofs... . Attached is the file build.out.gz, which contains
the stdout output. Clearly libtirpc is used somehow in the build.
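In full, the sequence was roughly (directory and package file names
abbreviated):

  apt-get source autofs
  cd autofs-5.0.7*/
  # then add --with-libtirpc to the ./configure call in debian/rules
  dpkg-buildpackage -b
  dpkg -i ../autofs_*.deb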

After restoring the maps in /etc I restarted the autofs service, and with
debug log level I get

Apr  8 16:20:33 core324 automount[14615]: open_mount:247: parse(sun): cannot open mount module nfs (/usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so: undefined symbol: clnt_dg_create)

as reported. I then double checked and actually

root@core324:~# ldd /usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so
        linux-vdso.so.1 =>  (0x00007ffff7ffd000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6
(0x00007ffff79f3000)
        /lib64/ld-linux-x86-64.so.2 (0x0000555555554000)

no libtirpc. 

I will have to read up on how to properly rebuild the package. The Debian
documentation is unfortunately not very user friendly; any hints are
appreciated.

Best Regards

Christof

On Fri, Apr 08, 2016 at 02:25:52PM +0200, Christof Koehler wrote:
> Hello again,
> > I've been thinking about this and I have a couple of thoughts.
> > 
> > As far a IPv6 goes using glibc RPC is, I think, not going to work!
> > 
> > That's the first thing that needs to be sorted out.
> > 
> > I've been using libtirpc in Fedora and RHEL builds for nearly 10 years
> > so I don't think the library problem is with autofs.
> > 
> > This is an indication someone is doing something a little dumb:
> > 
> > automount[20444]: open_mount:247: parse(sun): cannot open mount module
> > nfs (/usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so: undefined symbol:
> > clnt_dg_create)
> 
> concerning my failures to build autofs. First the client has all
> libtirpc packages I think are necessary:
> # dpkg -l libtirpc\*|grep ii
> ii  libtirpc-dev                                   0.2.2-5ubuntu2
> ii  libtirpc1:amd64                                0.2.2-5ubuntu2
> 
> We have libtirpc1 on the machines by default and I had to
> install libtirpc-dev so that ./configure would conclude that
> --with-libtirpc should do anything. 
> 
> Actually I tried to compile autofs 5.1.1 from source and a new 5.0.7
> package from ubuntu's source deb.
> 
> Using the sources at https://www.kernel.org/pub/linux/daemons/autofs/v5/
> I was basically confused what to do about the patches. Do I have to
> apply everything in patches-5.1.2 to autofs-5.1.1.tar.gz to get 5.1.2 ? 
> How do I do that automatically ? I noticed that autofs-5.1.1.tar.gz
> misses the patch mentioned in message 15 of
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=737679
> but contained in autofs-5.1.1-revert-fix-libtirpc-name-clash.patch.
> 
> So to make it short I certainly messed something up
> somewhere, the final binary and libs were no success . Additionally
> installation did not play nice, although --prefix= was set it overwrote
> configuration files in /etc.  But I think I
> cleaned everything up afterwards.
> 
> If someone can provide some hints I would try it again.
> 
> After that I rebuild the 5.0.7 package from source deb after adding
> --with-libtirpc to debian/rules as suggested in the bug reports. I
> installed from that package.  I checked with ldd after installing
> that ldd /usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so was build with a
> reference to libtirpc. 
> 
> This try gave the error message in the ubuntu bug
> report https://bugs.launchpad.net/ubuntu/+source/autofs/+bug/1564380
> 
> So, any hints are appreciated. As long as I can stick to 5.0.7 rebuilt
> from the source deb installing/re-installing is no problem and I can try
> different things you might want. Assuming I can get the program to work :-)
> 
> Thank you very much for all your help !
> 
> 
> Best Regards
> 
> Christof
> 
> 
> 
> -- 
> Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
> Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
> Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
> 28359 Bremen  
> 
> PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
> --
> To unsubscribe from this list: send the line "unsubscribe autofs" in

-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/

[-- Attachment #2: build.out.gz --]
[-- Type: application/octet-stream, Size: 4419 bytes --]


* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-08 14:29       ` Christof Koehler
@ 2016-04-08 15:32         ` Christof Koehler
  2016-04-10  2:09           ` Ian Kent
  2016-04-08 16:12         ` Christof Koehler
  2016-04-09  1:42         ` Ian Kent
  2 siblings, 1 reply; 49+ messages in thread
From: Christof Koehler @ 2016-04-08 15:32 UTC (permalink / raw)
  To: autofs

Hello,

I might add that using an IPv6 address, i.e.
[2001:638:708:1261:2000::118]:/locals, in the automounter map does not work
either, while an IPv4 address, i.e. 192.168.220.118:/locals, works without a
hitch. Is it desirable to be able to specify IPv6 addresses? I am just
noting the inconsistency.

Have a nice weekend!

Apr  8 17:25:24 core324 automount[24310]: attempting to mount entry /local/core330
Apr  8 17:25:24 core324 automount[24310]: lookup_mount: lookup(program): core330 -> -fstype=nfs4,rw,intr,nosuid,soft,nodev 192.168.220.118:/locals
Apr  8 17:25:24 core324 automount[24310]: lookup_mount: lookup(program): looking up core330
Apr  8 17:25:24 core324 automount[24310]: lookup_mount: lookup(program): core330 -> -fstype=nfs4,rw,intr,nosuid,soft,nodev [2001:638:708:1261:2000::118]:/locals
Apr  8 17:25:24 core324 automount[24310]: parse_mount: parse(sun): expanded entry: -fstype=nfs4,rw,intr,nosuid,soft,nodev [2001:638:708:1261:2000::118]:/locals
Apr  8 17:25:24 core324 automount[24310]: parse_mount: parse(sun): gathered options: fstype=nfs4,rw,intr,nosuid,soft,nodev
Apr  8 17:25:24 core324 automount[24310]: parse_mount: parse(sun): dequote("[2001:638:708:1261:2000::118]:/locals") -> [2001:638:708:1261:2000::118]:/locals
Apr  8 17:25:24 core324 automount[24310]: parse_mount: parse(sun): core of entry: options=fstype=nfs4,rw,intr,nosuid,soft,nodev,loc=[2001:638:708:1261:2000::118]:/locals
Apr  8 17:25:24 core324 automount[24310]: sun_mount: parse(sun): mounting root /local, mountpoint core330, what [2001:638:708:1261:2000::118]:/locals, fstype nfs4, options rw,intr,nosuid,soft,nodev
Apr  8 17:25:24 core324 automount[24310]: mount_mount: mount(nfs): root=/local name=core330 what=[2001:638:708:1261:2000::118]:/locals, fstype=nfs4, options=rw,intr,nosuid,soft,nodev
Apr  8 17:25:24 core324 automount[24310]: mount_mount: mount(nfs): nfs options="rw,intr,nosuid,soft,nodev", nobind=0, nosymlink=0, ro=0
Apr  8 17:25:24 core324 automount[24310]: mount(nfs): no hosts available
Apr  8 17:25:24 core324 automount[24310]: dev_ioctl_send_fail: token = 1923
Apr  8 17:25:24 core324 automount[24310]: failed to mount /local/core330
Apr  8 17:25:24 core324 automount[24310]: handle_packet: type = 3
Apr  8 17:25:24 core324 automount[24310]: handle_packet_missing_indirect: token 1924, name core330, request pid 24519
Apr  8 17:25:24 core324 automount[24310]: dev_ioctl_send_fail: token = 1924

Best Regards

Christof



-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-08 14:29       ` Christof Koehler
  2016-04-08 15:32         ` Christof Koehler
@ 2016-04-08 16:12         ` Christof Koehler
  2016-04-08 16:15           ` Christof Koehler
  2016-04-10  2:14           ` Ian Kent
  2016-04-09  1:42         ` Ian Kent
  2 siblings, 2 replies; 49+ messages in thread
From: Christof Koehler @ 2016-04-08 16:12 UTC (permalink / raw)
  To: autofs

Hello,

I managed to build a new autofs 5.1.1 from the Ubuntu 16.04 source
package after throwing out all systemd dependencies. With that

root@core324:~/autofs-5.1.1# ldd /usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so
        linux-vdso.so.1 =>  (0x00007ffff7ffd000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007ffff7b73000)
        libtirpc.so.1 => /lib/x86_64-linux-gnu/libtirpc.so.1 (0x00007ffff794b000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007ffff7585000)
        /lib64/ld-linux-x86-64.so.2 (0x0000555555554000)
        libgssglue.so.1 => /lib/x86_64-linux-gnu/libgssglue.so.1 (0x00007ffff737b000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007ffff7177000)

However, I had to edit lib/rpc_subs.c as indicated in 
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=737679, otherwise it would not 
work (/usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so: undefined symbol: auth_put).

With the new binary /local/core330 does not mount either, but the error message is
probably an improvement:

Apr  8 18:05:25 core324 automount[963]: parse_mount: parse(sun): dequote("core330:/locals") -> core330:/locals
Apr  8 18:05:25 core324 automount[963]: parse_mount: parse(sun): core of entry: options=fstype=nfs4,rw,intr,nosuid,soft,nodev, loc=core330:/locals
Apr  8 18:05:25 core324 automount[963]: sun_mount: parse(sun): mounting root /local, mountpoint core330, what core330:/locals, fstype nfs4, options rw,intr,nosuid,soft,nodev
Apr  8 18:05:25 core324 automount[963]: mount_mount: mount(nfs): root=/local name=core330 what=core330:/locals, fstype=nfs4, options=rw,intr,nosuid,soft,nodev
Apr  8 18:05:25 core324 automount[963]: mount_mount: mount(nfs): nfs options="rw,intr,nosuid,soft,nodev", nobind=0, nosymlink=0, ro=0
Apr  8 18:05:25 core324 automount[963]: get_nfs_info: called with host core330(192.168.220.118) proto 6 version 0x40
Apr  8 18:05:25 core324 automount[963]: get_nfs_info: called with host core330(fd5f:852:a27c:1261:2000::118) proto 6 version 0x40
Apr  8 18:05:25 core324 automount[963]: get_nfs_info: called with host core330(2001:638:708:1261:2000::118) proto 6 version 0x40
Apr  8 18:05:25 core324 automount[963]: mount(nfs): no hosts available
Apr  8 18:05:25 core324 automount[963]: dev_ioctl_send_fail: token = 3004
Apr  8 18:05:25 core324 automount[963]: failed to mount /local/core330
Apr  8 18:05:25 core324 automount[963]: handle_packet: type = 3
Apr  8 18:05:25 core324 automount[963]: handle_packet_missing_indirect: token 3005, name core330, request pid 2397

I had no success trying to get the 5.0.7 source package to actually link
against libtirpc; no idea why.
 
Best Regards

Christof

On Fri, Apr 08, 2016 at 04:29:07PM +0200, Christof Koehler wrote:
> Hello,
> 
> apparently I confused my 5.1.1 source built experiment and my debian
> package rebuild experiment when I reported that libtirpc was used in my
> last email. So here is a new try to rebuild the deb source with
> --with-libtirpc.
> 
> I did a apt-get source autofs and added --with-libtirpc to debian rules.
> After that it would of course not allow me to build a package, "aborting
> due to unexpected upstream changes". So I just did a "dpkg-buildpackage
> -b" and then dpkg -i autofs... . Attached is the file build.out.gz which
> contains the stdout output. Clearly libtirpc is used somehow in the build.
> 
> After restoring maps in /etc I did a service restart autofs and with
> debug loglevel I get 
> 
> Apr  8 16:20:33 core324 automount[14615]: open_mount:247: parse(sun):
> cannot open mount module nfs
> (/usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so: undefined symbol:
> clnt_dg_create)
> 
> as reported. I then double checked and actually
> 
> root@core324:~# ldd /usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so
>         linux-vdso.so.1 =>  (0x00007ffff7ffd000)
>         libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6
> (0x00007ffff79f3000)
>         /lib64/ld-linux-x86-64.so.2 (0x0000555555554000)
> 
> no libtirpc. 
> 
> I will have to read up on how to properly rebuild the package. The
> debian documentation is unfortunately not very user friendly, any
> hints are appreciated.
> 
> Best Regards
> 
> Christof
> 
> On Fri, Apr 08, 2016 at 02:25:52PM +0200, Christof Koehler wrote:
> > Hello again,
> > > I've been thinking about this and I have a couple of thoughts.
> > > 
> > > As far a IPv6 goes using glibc RPC is, I think, not going to work!
> > > 
> > > That's the first thing that needs to be sorted out.
> > > 
> > > I've been using libtirpc in Fedora and RHEL builds for nearly 10 years
> > > so I don't think the library problem is with autofs.
> > > 
> > > This is an indication someone is doing something a little dumb:
> > > 
> > > automount[20444]: open_mount:247: parse(sun): cannot open mount module
> > > nfs (/usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so: undefined symbol:
> > > clnt_dg_create)
> > 
> > concerning my failures to build autofs. First the client has all
> > libtirpc packages I think are necessary:
> > # dpkg -l libtirpc\*|grep ii
> > ii  libtirpc-dev                                   0.2.2-5ubuntu2
> > ii  libtirpc1:amd64                                0.2.2-5ubuntu2
> > 
> > We have libtirpc1 on the machines by default and I had to
> > install libtirpc-dev so that ./configure would conclude that
> > --with-libtirpc should do anything. 
> > 
> > Actually I tried to compile autofs 5.1.1 from source and a new 5.0.7
> > package from ubuntu's source deb.
> > 
> > Using the sources at https://www.kernel.org/pub/linux/daemons/autofs/v5/
> > I was basically confused what to do about the patches. Do I have to
> > apply everything in patches-5.1.2 to autofs-5.1.1.tar.gz to get 5.1.2 ? 
> > How do I do that automatically ? I noticed that autofs-5.1.1.tar.gz
> > misses the patch mentioned in message 15 of
> > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=737679
> > but contained in autofs-5.1.1-revert-fix-libtirpc-name-clash.patch.
> > 
> > So to make it short I certainly messed something up
> > somewhere, the final binary and libs were no success . Additionally
> > installation did not play nice, although --prefix= was set it overwrote
> > configuration files in /etc.  But I think I
> > cleaned everything up afterwards.
> > 
> > If someone can provide some hints I would try it again.
> > 
> > After that I rebuild the 5.0.7 package from source deb after adding
> > --with-libtirpc to debian/rules as suggested in the bug reports. I
> > installed from that package.  I checked with ldd after installing
> > that ldd /usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so was build with a
> > reference to libtirpc. 
> > 
> > This try gave the error message in the ubuntu bug
> > report https://bugs.launchpad.net/ubuntu/+source/autofs/+bug/1564380
> > 
> > So, any hints are appreciated. As long as I can stick to 5.0.7 rebuilt
> > from the source deb installing/re-installing is no problem and I can try
> > different things you might want. Assuming I can get the program to work :-)
> > 
> > Thank you very much for all your help !
> > 
> > 
> > Best Regards
> > 
> > Christof
> > 
> > 
> > 
> > -- 
> > Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
> > Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
> > Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
> > 28359 Bremen  
> > 
> > PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
> > --
> > To unsubscribe from this list: send the line "unsubscribe autofs" in
> 
> -- 
> Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
> Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
> Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
> 28359 Bremen  
> 
> PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/



-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-08 16:12         ` Christof Koehler
@ 2016-04-08 16:15           ` Christof Koehler
  2016-04-10  2:17             ` Ian Kent
  2016-04-10  2:14           ` Ian Kent
  1 sibling, 1 reply; 49+ messages in thread
From: Christof Koehler @ 2016-04-08 16:15 UTC (permalink / raw)
  To: autofs

OK, this actually fails to mount anything at all

Apr  8 18:12:53 core324 automount[963]: parse_mount: parse(sun): core of entry: options=fstype=nfs4,rw,intr,nosuid,soft,nodev, loc=core320:/locals
Apr  8 18:12:53 core324 automount[963]: sun_mount: parse(sun): mounting root /local, mountpoint core320, what core320:/locals, fstype nfs4, options rw,intr,nosuid,soft,nodev
Apr  8 18:12:53 core324 automount[963]: mount_mount: mount(nfs): root=/local name=core320 what=core320:/locals, fstype=nfs4, options=rw,intr,nosuid,soft,nodev
Apr  8 18:12:53 core324 automount[963]: mount_mount: mount(nfs): nfs options="rw,intr,nosuid,soft,nodev", nobind=0, nosymlink=0, ro=0
Apr  8 18:12:53 core324 automount[963]: get_nfs_info: called with host core320(192.168.220.70) proto 6 version 0x40
Apr  8 18:12:53 core324 automount[963]: get_nfs_info: called with host core320(2001:638:708:1261:2000::70) proto 6 version 0x40
Apr  8 18:12:53 core324 automount[963]: mount(nfs): no hosts available
Apr  8 18:12:53 core324 automount[963]: dev_ioctl_send_fail: token = 3006
Apr  8 18:12:53 core324 automount[963]: failed to mount /local/core320
Apr  8 18:12:53 core324 automount[963]: handle_packet: type = 3

Maybe I am missing dependencies ... 
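(If it is a missing library dependency, something like

  ldd -r /usr/lib/x86_64-linux-gnu/autofs/*.so /usr/sbin/automount

should list any remaining unresolved symbols.)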


On Fri, Apr 08, 2016 at 06:12:26PM +0200, Christof Koehler wrote:
> Hello,
> 
> I managed to build a new autofs 5.1.1 from the ubuntu 16.04 source
> package after throwing out all systemd dependencies. With that
> 
> root@core324:~/autofs-5.1.1# ldd /usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so
>         linux-vdso.so.1 =>  (0x00007ffff7ffd000)
>         libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007ffff7b73000)
>         libtirpc.so.1 => /lib/x86_64-linux-gnu/libtirpc.so.1 (0x00007ffff794b000)
>         libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007ffff7585000)
>         /lib64/ld-linux-x86-64.so.2 (0x0000555555554000)
>         libgssglue.so.1 => /lib/x86_64-linux-gnu/libgssglue.so.1 (0x00007ffff737b000)
>         libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007ffff7177000)
> 
> However, I had to edit lib/rpc_subs.c as indicated in 
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=737679, otherwise it would not 
> work (/usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so: undefined symbol: auth_put).
> 
> With the new binary /local/core330 does not mount either, but the error message is
> probably an improvement:
> 
> Apr  8 18:05:25 core324 automount[963]: parse_mount: parse(sun): dequote("core330:/locals"
> ) -> core330:/locals
> Apr  8 18:05:25 core324 automount[963]: parse_mount: parse(sun): core of entry: options=fs
> type=nfs4,rw,intr,nosuid,soft,nodev, loc=core330:/locals
> Apr  8 18:05:25 core324 automount[963]: sun_mount: parse(sun): mounting root /local, mount
> point core330, what core330:/locals, fstype nfs4, options rw,intr,nosuid,soft,nodev
> Apr  8 18:05:25 core324 automount[963]: mount_mount: mount(nfs): root=/local name=core330 
> what=core330:/locals, fstype=nfs4, options=rw,intr,nosuid,soft,nodev
> Apr  8 18:05:25 core324 automount[963]: mount_mount: mount(nfs): nfs options="rw,intr,nosu
> id,soft,nodev", nobind=0, nosymlink=0, ro=0
> Apr  8 18:05:25 core324 automount[963]: get_nfs_info: called with host core330(192.168.220
> .118) proto 6 version 0x40
> Apr  8 18:05:25 core324 automount[963]: get_nfs_info: called with host core330(fd5f:852:a27c:1261:2000::118) proto 6 version 0x40
> Apr  8 18:05:25 core324 automount[963]: get_nfs_info: called with host core330(2001:638:708:1261:2000::118) proto 6 version 0x40
> Apr  8 18:05:25 core324 automount[963]: mount(nfs): no hosts available
> Apr  8 18:05:25 core324 automount[963]: dev_ioctl_send_fail: token = 3004
> Apr  8 18:05:25 core324 automount[963]: failed to mount /local/core330
> Apr  8 18:05:25 core324 automount[963]: handle_packet: type = 3
> Apr  8 18:05:25 core324 automount[963]: handle_packet_missing_indirect: token 3005, name core330, request pid 2397
> 
> I had no success trying to get the 5.0.7 source package to actually link libtirpc, 
> no idea why.
>  
> Best Regards
> 
> Christof
> 
> On Fri, Apr 08, 2016 at 04:29:07PM +0200, Christof Koehler wrote:
> > Hello,
> > 
> > apparently I confused my 5.1.1 source built experiment and my debian
> > package rebuild experiment when I reported that libtirpc was used in my
> > last email. So here is a new try to rebuild the deb source with
> > --with-libtirpc.
> > 
> > I did a apt-get source autofs and added --with-libtirpc to debian rules.
> > After that it would of course not allow me to build a package, "aborting
> > due to unexpected upstream changes". So I just did a "dpkg-buildpackage
> > -b" and then dpkg -i autofs... . Attached is the file build.out.gz which
> > contains the stdout output. Clearly libtirpc is used somehow in the build.
> > 
> > After restoring maps in /etc I did a service restart autofs and with
> > debug loglevel I get 
> > 
> > Apr  8 16:20:33 core324 automount[14615]: open_mount:247: parse(sun):
> > cannot open mount module nfs
> > (/usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so: undefined symbol:
> > clnt_dg_create)
> > 
> > as reported. I then double checked and actually
> > 
> > root@core324:~# ldd /usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so
> >         linux-vdso.so.1 =>  (0x00007ffff7ffd000)
> >         libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6
> > (0x00007ffff79f3000)
> >         /lib64/ld-linux-x86-64.so.2 (0x0000555555554000)
> > 
> > no libtirpc. 
> > 
> > I will have to read up on how to properly rebuild the package. The
> > debian documentation is unfortunately not very user friendly, any
> > hints are appreciated.
> > 
> > Best Regards
> > 
> > Christof
> > 
> > On Fri, Apr 08, 2016 at 02:25:52PM +0200, Christof Koehler wrote:
> > > Hello again,
> > > > I've been thinking about this and I have a couple of thoughts.
> > > > 
> > > > As far a IPv6 goes using glibc RPC is, I think, not going to work!
> > > > 
> > > > That's the first thing that needs to be sorted out.
> > > > 
> > > > I've been using libtirpc in Fedora and RHEL builds for nearly 10 years
> > > > so I don't think the library problem is with autofs.
> > > > 
> > > > This is an indication someone is doing something a little dumb:
> > > > 
> > > > automount[20444]: open_mount:247: parse(sun): cannot open mount module
> > > > nfs (/usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so: undefined symbol:
> > > > clnt_dg_create)
> > > 
> > > concerning my failures to build autofs. First the client has all
> > > libtirpc packages I think are necessary:
> > > # dpkg -l libtirpc\*|grep ii
> > > ii  libtirpc-dev                                   0.2.2-5ubuntu2
> > > ii  libtirpc1:amd64                                0.2.2-5ubuntu2
> > > 
> > > We have libtirpc1 on the machines by default and I had to
> > > install libtirpc-dev so that ./configure would conclude that
> > > --with-libtirpc should do anything. 
> > > 
> > > Actually I tried to compile autofs 5.1.1 from source and a new 5.0.7
> > > package from ubuntu's source deb.
> > > 
> > > Using the sources at https://www.kernel.org/pub/linux/daemons/autofs/v5/
> > > I was basically confused what to do about the patches. Do I have to
> > > apply everything in patches-5.1.2 to autofs-5.1.1.tar.gz to get 5.1.2 ? 
> > > How do I do that automatically ? I noticed that autofs-5.1.1.tar.gz
> > > misses the patch mentioned in message 15 of
> > > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=737679
> > > but contained in autofs-5.1.1-revert-fix-libtirpc-name-clash.patch.
> > > 
> > > So to make it short I certainly messed something up
> > > somewhere, the final binary and libs were no success . Additionally
> > > installation did not play nice, although --prefix= was set it overwrote
> > > configuration files in /etc.  But I think I
> > > cleaned everything up afterwards.
> > > 
> > > If someone can provide some hints I would try it again.
> > > 
> > > After that I rebuild the 5.0.7 package from source deb after adding
> > > --with-libtirpc to debian/rules as suggested in the bug reports. I
> > > installed from that package.  I checked with ldd after installing
> > > that ldd /usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so was build with a
> > > reference to libtirpc. 
> > > 
> > > This try gave the error message in the ubuntu bug
> > > report https://bugs.launchpad.net/ubuntu/+source/autofs/+bug/1564380
> > > 
> > > So, any hints are appreciated. As long as I can stick to 5.0.7 rebuilt
> > > from the source deb installing/re-installing is no problem and I can try
> > > different things you might want. Assuming I can get the program to work :-)
> > > 
> > > Thank you very much for all your help !
> > > 
> > > 
> > > Best Regards
> > > 
> > > Christof
> > > 
> > > 
> > > 
> > > -- 
> > > Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
> > > Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
> > > Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
> > > 28359 Bremen  
> > > 
> > > PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
> > > --
> > > To unsubscribe from this list: send the line "unsubscribe autofs" in
> > 
> > -- 
> > Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
> > Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
> > Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
> > 28359 Bremen  
> > 
> > PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
> 
> 
> 
> -- 
> Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
> Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
> Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
> 28359 Bremen  
> 
> PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
> --
> To unsubscribe from this list: send the line "unsubscribe autofs" in

-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-08 12:25     ` Christof Koehler
  2016-04-08 14:29       ` Christof Koehler
@ 2016-04-09  1:35       ` Ian Kent
  1 sibling, 0 replies; 49+ messages in thread
From: Ian Kent @ 2016-04-09  1:35 UTC (permalink / raw)
  To: christof.koehler, autofs

On Fri, 2016-04-08 at 14:25 +0200, Christof Koehler wrote:
> Hello again,
> > I've been thinking about this and I have a couple of thoughts.
> > 
> > As far a IPv6 goes using glibc RPC is, I think, not going to work!
> > 
> > That's the first thing that needs to be sorted out.
> > 
> > I've been using libtirpc in Fedora and RHEL builds for nearly 10
> > years
> > so I don't think the library problem is with autofs.
> > 
> > This is an indication someone is doing something a little dumb:
> > 
> > automount[20444]: open_mount:247: parse(sun): cannot open mount
> > module
> > nfs (/usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so: undefined
> > symbol:
> > clnt_dg_create)
> 
> concerning my failures to build autofs. First the client has all
> libtirpc packages I think are necessary:
> # dpkg -l libtirpc\*|grep ii
> ii  libtirpc-dev                                   0.2.2-5ubuntu2
> ii  libtirpc1:amd64                                0.2.2-5ubuntu2
> 
> We have libtirpc1 on the machines by default and I had to
> install libtirpc-dev so that ./configure would conclude that
> --with-libtirpc should do anything. 
> 
> Actually I tried to compile autofs 5.1.1 from source and a new 5.0.7
> package from ubuntu's source deb.
> 
> Using the sources at 
> https://www.kernel.org/pub/linux/daemons/autofs/v5/
> I was basically confused what to do about the patches. Do I have to
> apply everything in patches-5.1.2 to autofs-5.1.1.tar.gz to get 5.1.2
> ? 

That's entirely up to you.
Those patches are the ones that have been committed to the repo so far
toward the unreleased 5.1.2.

> How do I do that automatically ? I noticed that autofs-5.1.1.tar.gz

I don't know what you mean.

If you're talking about building a deb then there's a place in the build
tree for patches, where each patch usually has a number prefix to establish
order. They are then applied prior to the configure step, I think.

If you're talking about the autofs source then you could try quilt.
I'm pretty sure the patch order file will work with it.
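Roughly (untested; I'm assuming the order file shipped in patches-5.1.2 is
called patch_order and can serve directly as the quilt series file):

  cd autofs-5.1.1
  mkdir -p patches
  cp /path/to/patches-5.1.2/* patches/
  # quilt expects the ordered list of patch names in patches/series
  cp patches/patch_order patches/series
  quilt push -a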

> misses the patch mentioned in message 15 of
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=737679
> but contained in autofs-5.1.1-revert-fix-libtirpc-name-clash.patch.

Whether that patch is needed or not depends on the version of libtirpc.

At one point it had a define in one of the header files that clashed
with an internal name in autofs.

That's been removed in later versions of libtirpc (and I think wasn't
present in earlier versions, don't know the specifics).

> 
> So to make it short I certainly messed something up
> somewhere, the final binary and libs were no success . Additionally
> installation did not play nice, although --prefix= was set it
> overwrote
> configuration files in /etc.  But I think I
> cleaned everything up afterwards.

Again, it's not clear if you're talking about a deb or a source install.

I'm pretty sure the make install will copy existing configs to a backup,
not sure what happens in a Debian type environment.

The deb install should certainly consider existing configuration.

Not sure what you mean about the prefix.
The prefix is meant to alter the default install directories.
If you're using a package build system then you typically install to some
other directory by using DESTDIR (IIRC) during the package build, and the
build system packages the installed files from DESTDIR down.
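Roughly, the idea is (a sketch; whether the variable is really DESTDIR or
something autofs-specific is worth checking in the Makefiles):

  ./configure --prefix=/usr
  make
  make install DESTDIR=/tmp/autofs-root

so everything lands under /tmp/autofs-root/usr/... and the package is
assembled from that tree instead of touching the running system.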

> 
> If someone can provide some hints I would try it again.
> 
> After that I rebuild the 5.0.7 package from source deb after adding
> --with-libtirpc to debian/rules as suggested in the bug reports. I
> installed from that package.  I checked with ldd after installing
> that ldd /usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so was build with
> a
> reference to libtirpc. 

I looked at a log you provided and the build does appear to use libtirpc
and version 0.2.2 of libtirpc certainly has clnt_dg_create().

So not sure about that.

TBH I really don't want to construct a Ubuntu VM and struggle with the
Debian build system. I really don't like it, and it's been so long since I
used it that it would take ages to work out what I need to do, again.

Ian

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-08 14:29       ` Christof Koehler
  2016-04-08 15:32         ` Christof Koehler
  2016-04-08 16:12         ` Christof Koehler
@ 2016-04-09  1:42         ` Ian Kent
  2016-04-09  9:56           ` Christof Koehler
  2 siblings, 1 reply; 49+ messages in thread
From: Ian Kent @ 2016-04-09  1:42 UTC (permalink / raw)
  To: christof.koehler, autofs

On Fri, 2016-04-08 at 16:29 +0200, Christof Koehler wrote:
> Hello,
> 
> apparently I confused my 5.1.1 source built experiment and my debian
> package rebuild experiment when I reported that libtirpc was used in
> my
> last email. So here is a new try to rebuild the deb source with
> --with-libtirpc.
> 
> I did a apt-get source autofs and added --with-libtirpc to debian
> rules.
> After that it would of course not allow me to build a package,
> "aborting
> due to unexpected upstream changes". So I just did a "dpkg
> -buildpackage
> -b" and then dpkg -i autofs... . Attached is the file build.out.gz
> which
> contains the stdout output. Clearly libtirpc is used somehow in the
> build.
> 
> After restoring maps in /etc I did a service restart autofs and with
> debug loglevel I get 
> 
> Apr  8 16:20:33 core324 automount[14615]: open_mount:247: parse(sun):
> cannot open mount module nfs
> (/usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so: undefined symbol:
> clnt_dg_create)
> 
> as reported. I then double checked and actually
> 
> root@core324:~# ldd /usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so
>         linux-vdso.so.1 =>  (0x00007ffff7ffd000)
>         libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6
> (0x00007ffff79f3000)
>         /lib64/ld-linux-x86-64.so.2 (0x0000555555554000)
> 
> no libtirpc. 

And yet the build from the log looks ok...
There's not even a link entry there, which implies it hasn't been built
using libtirpc, but the build looks like it is using it... puzzling.

Ian

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-09  1:42         ` Ian Kent
@ 2016-04-09  9:56           ` Christof Koehler
  2016-04-10  2:29             ` Ian Kent
  2016-04-25  4:40             ` Ian Kent
  0 siblings, 2 replies; 49+ messages in thread
From: Christof Koehler @ 2016-04-09  9:56 UTC (permalink / raw)
  To: autofs

Hello,

yes, indeed the 5.0.7 source deb rebuild is very strange. I also checked
the libraries under debian/usr/lib/... in the build directory and they
do not contain a libtirpc reference.

I can see two possible strategies:
1.  Concentrate on the 5.1.1 deb source I grabbed from ubuntu 16.04 and the 
deb package rebuild from that. Those libraries obviously contain a reference
to libtirpc but do not mount anything. A too-old libtirpc comes to mind then ?
2.  Another approach would be to wait for the 16.04 release. I will start
deploying 16.04.1 once it is released in the second half of this year. I
would then try to rebuild from the deb source including libtirpc (which
would be 0.2.5) again and report back. So, basically deferring this.

If you do not have another idea to get the 5.1.1 deb package rebuild to
work I would think that deferring and trying with matched and current
versions is the right thing to do.

Finally, to clarify: the comments on manually applying patches and --prefix= refer to 
building from the pristine source available 
at https://www.kernel.org/pub/linux/daemons/autofs/v5/, with no deb
source or package involved. I was working outside the package
system. But obviously I do not know enough about building
from the pristine source.

Thank you very much !

Best Regards

Christof





On Sat, Apr 09, 2016 at 09:42:06AM +0800, Ian Kent wrote:
> On Fri, 2016-04-08 at 16:29 +0200, Christof Koehler wrote:
> > Hello,
> > 
> > apparently I confused my 5.1.1 source built experiment and my debian
> > package rebuild experiment when I reported that libtirpc was used in
> > my
> > last email. So here is a new try to rebuild the deb source with
> > --with-libtirpc.
> > 
> > I did a apt-get source autofs and added --with-libtirpc to debian
> > rules.
> > After that it would of course not allow me to build a package,
> > "aborting
> > due to unexpected upstream changes". So I just did a "dpkg
> > -buildpackage
> > -b" and then dpkg -i autofs... . Attached is the file build.out.gz
> > which
> > contains the stdout output. Clearly libtirpc is used somehow in the
> > build.
> > 
> > After restoring maps in /etc I did a service restart autofs and with
> > debug loglevel I get 
> > 
> > Apr  8 16:20:33 core324 automount[14615]: open_mount:247: parse(sun):
> > cannot open mount module nfs
> > (/usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so: undefined symbol:
> > clnt_dg_create)
> > 
> > as reported. I then double checked and actually
> > 
> > root@core324:~# ldd /usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so
> >         linux-vdso.so.1 =>  (0x00007ffff7ffd000)
> >         libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6
> > (0x00007ffff79f3000)
> >         /lib64/ld-linux-x86-64.so.2 (0x0000555555554000)
> > 
> > no libtirpc. 
> 
> And yet the build from the log looks ok....
> There's no even a link entry there which implies it hasn't been built
> using libtirpc but the build looks like it is using it... puzzling.
> 
> Ian
> --
> To unsubscribe from this list: send the line "unsubscribe autofs" in

-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-08 15:32         ` Christof Koehler
@ 2016-04-10  2:09           ` Ian Kent
  0 siblings, 0 replies; 49+ messages in thread
From: Ian Kent @ 2016-04-10  2:09 UTC (permalink / raw)
  To: christof.koehler, autofs

On Fri, 2016-04-08 at 17:32 +0200, Christof Koehler wrote:
> Hello,
> 
> I might add that using an IPv6 address, i.e.
> [2001:638:708:1261:2000::118]:/locals, in the automounter map does not
> work either while an IPv4 address, i.e. 192.168.220.118:/locals, works
> without a hitch. Is it desirable to be able to specify IPv6 addresses 
>  ?
> I am just noting the inconsistency.

It's not entirely straightforward to answer that due to the way autofs
will probe availability and proximity to work out which server to use.

That's possibly part of the multi-homed problem you're seeing, and whether
autofs should restrict itself to only IPv6 addresses is, I suspect, a
question we will end up asking.

But there seems to be more to this at the moment so that needs to be
resolved first.

The short answer is that IPv6 addresses should be usable but there have 
been changes along the way to how autofs uses libtirpc.
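
As an aside on the address-in-map form, the bracketed "[addr]:/path"
syntax is at least easy to recognise and validate; something along these
lines is all a parser needs to do. This is purely an illustrative sketch,
not the actual autofs parser, and the example address is just the one from
your map:

        #include <arpa/inet.h>
        #include <stdio.h>
        #include <string.h>

        /* Split "[addr]:/path" into address and path and check that the
         * address really is an IPv6 literal.  Illustrative only.
         */
        static int split_ipv6_location(const char *loc, char *addr,
                                       size_t alen, const char **path)
        {
                const char *end;
                struct in6_addr a6;
                size_t n;

                if (loc[0] != '[' || !(end = strchr(loc, ']')))
                        return 0;
                n = end - (loc + 1);
                if (n >= alen || end[1] != ':')
                        return 0;
                memcpy(addr, loc + 1, n);
                addr[n] = '\0';
                *path = end + 2;
                return inet_pton(AF_INET6, addr, &a6) == 1;
        }

        int main(void)
        {
                char addr[INET6_ADDRSTRLEN];
                const char *path;

                if (split_ipv6_location("[2001:638:708:1261:2000::118]:/locals",
                                        addr, sizeof(addr), &path))
                        printf("host %s, path %s\n", addr, path);
                return 0;
        }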

> 
> Have a nice weekend !
> 
> Apr  8 17:25:24 core324 automount[24310]: attempting to mount entry
> /local/core330
> Apr  8 17:25:24 core324 automount[24310]: lookup_mount:
> lookup(program): core330 -> -fstype=nfs4,rw,intr,nosuid,soft,nodev
> 192.168.220.118:/locals
> Apr  8 17:25:24 core324 automount[24310]: lookup_mount:
> lookup(program): looking up core330
> Apr  8 17:25:24 core324 automount[24310]: lookup_mount:
> lookup(program): core330 -> -fstype=nfs4,rw,intr,nosuid,soft,nodev
> [2001:638:708:1261:2000::118]:/locals
> Apr  8 17:25:24 core324 automount[24310]: parse_mount: parse(sun):
> expanded entry: -fstype=nfs4,rw,intr,nosuid,soft,nodev
> [2001:638:708:1261:2000::118]:/locals
> Apr  8 17:25:24 core324 automount[24310]: parse_mount: parse(sun):
> gathered options: fstype=nfs4,rw,intr,nosuid,soft,nodev
> Apr  8 17:25:24 core324 automount[24310]: parse_mount: parse(sun):
> dequote("[2001:638:708:1261:2000::118]:/locals") ->
> [2001:638:708:1261:2000::118]:/locals
> Apr  8 17:25:24 core324 automount[24310]: parse_mount: parse(sun):
> core of entry:
> options=fstype=nfs4,rw,intr,nosuid,soft,nodev,loc=[2001:638:708:1261:2
> 000::118]:/locals
> Apr  8 17:25:24 core324 automount[24310]: sun_mount: parse(sun):
> mounting root /local, mountpoint core330, what
> [2001:638:708:1261:2000::118]:/locals, fstype nfs4, options
> rw,intr,nosuid,soft,nodev
> Apr  8 17:25:24 core324 automount[24310]: mount_mount: mount(nfs):
> root=/local name=core33
> 0 what=[2001:638:708:1261:2000::118]:/locals, fstype=nfs4,
> options=rw,intr,nosuid,soft,nod
> ev
> Apr  8 17:25:24 core324 automount[24310]: mount_mount: mount(nfs): nfs
> options="rw,intr,no
> suid,soft,nodev", nobind=0, nosymlink=0, ro=0
> Apr  8 17:25:24 core324 automount[24310]: mount(nfs): no hosts
> available
> Apr  8 17:25:24 core324 automount[24310]: dev_ioctl_send_fail: token =
> 1923
> Apr  8 17:25:24 core324 automount[24310]: failed to mount
> /local/core330
> Apr  8 17:25:24 core324 automount[24310]: handle_packet: type = 3
> Apr  8 17:25:24 core324 automount[24310]:
> handle_packet_missing_indirect: token 1924, name core330, request pid
> 24519
> Apr  8 17:25:24 core324 automount[24310]: dev_ioctl_send_fail: token =
> 1924
> 
> Best Regards
> 
> Christof
> 
> 
> 
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-08 16:12         ` Christof Koehler
  2016-04-08 16:15           ` Christof Koehler
@ 2016-04-10  2:14           ` Ian Kent
  1 sibling, 0 replies; 49+ messages in thread
From: Ian Kent @ 2016-04-10  2:14 UTC (permalink / raw)
  To: christof.koehler, autofs

On Fri, 2016-04-08 at 18:12 +0200, Christof Koehler wrote:
> Hello,
> 
> I managed to build a new autofs 5.1.1 from the ubuntu 16.04 source
> package after throwing out all systemd dependencies. With that
> 
> root@core324:~/autofs-5.1.1# ldd /usr/lib/x86_64-linux
> -gnu/autofs/mount_nfs.so
>         linux-vdso.so.1 =>  (0x00007ffff7ffd000)
>         libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0
> (0x00007ffff7b73000)
>         libtirpc.so.1 => /lib/x86_64-linux-gnu/libtirpc.so.1
> (0x00007ffff794b000)
>         libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6
> (0x00007ffff7585000)
>         /lib64/ld-linux-x86-64.so.2 (0x0000555555554000)
>         libgssglue.so.1 => /lib/x86_64-linux-gnu/libgssglue.so.1
> (0x00007ffff737b000)
>         libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2
> (0x00007ffff7177000)
> 
> However, I had to edit lib/rpc_subs.c as indicated in 
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=737679, otherwise it
> would not 
> work (/usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so: undefined symbol:
> auth_put).

I think that is caused by the libtirpc problem you asked about earlier.
As I say, it depends on the version of libtirpc in use.

> 
> With the new binary /local/core330 does not mount either, but the
> error message is
> probably an improvement:
> 
> Apr  8 18:05:25 core324 automount[963]: parse_mount: parse(sun):
> dequote("core330:/locals"
> ) -> core330:/locals
> Apr  8 18:05:25 core324 automount[963]: parse_mount: parse(sun): core
> of entry: options=fs
> type=nfs4,rw,intr,nosuid,soft,nodev, loc=core330:/locals
> Apr  8 18:05:25 core324 automount[963]: sun_mount: parse(sun):
> mounting root /local, mount
> point core330, what core330:/locals, fstype nfs4, options
> rw,intr,nosuid,soft,nodev
> Apr  8 18:05:25 core324 automount[963]: mount_mount: mount(nfs):
> root=/local name=core330 
> what=core330:/locals, fstype=nfs4, options=rw,intr,nosuid,soft,nodev
> Apr  8 18:05:25 core324 automount[963]: mount_mount: mount(nfs): nfs
> options="rw,intr,nosu
> id,soft,nodev", nobind=0, nosymlink=0, ro=0
> Apr  8 18:05:25 core324 automount[963]: get_nfs_info: called with host
> core330(192.168.220
> .118) proto 6 version 0x40
> Apr  8 18:05:25 core324 automount[963]: get_nfs_info: called with host
> core330(fd5f:852:a27c:1261:2000::118) proto 6 version 0x40
> Apr  8 18:05:25 core324 automount[963]: get_nfs_info: called with host
> core330(2001:638:708:1261:2000::118) proto 6 version 0x40
> Apr  8 18:05:25 core324 automount[963]: mount(nfs): no hosts available

Sadly that doesn't tell us much either, only that the RPC communication
has failed to get a result, and in an expected way, so there's no error
detail to go on.

I'm not sure where to go from here....

> Apr  8 18:05:25 core324 automount[963]: dev_ioctl_send_fail: token =
> 3004
> Apr  8 18:05:25 core324 automount[963]: failed to mount /local/core330
> Apr  8 18:05:25 core324 automount[963]: handle_packet: type = 3
> Apr  8 18:05:25 core324 automount[963]:
> handle_packet_missing_indirect: token 3005, name core330, request pid
> 2397
> 
> I had no success trying to get the 5.0.7 source package to actually
> link libtirpc, 
> no idea why.
>  
> Best Regards
> 
> Christof
> 
> On Fri, Apr 08, 2016 at 04:29:07PM +0200, Christof Koehler wrote:
> > Hello,
> > 
> > apparently I confused my 5.1.1 source built experiment and my debian
> > package rebuild experiment when I reported that libtirpc was used in
> > my
> > last email. So here is a new try to rebuild the deb source with
> > --with-libtirpc.
> > 
> > I did a apt-get source autofs and added --with-libtirpc to debian
> > rules.
> > After that it would of course not allow me to build a package,
> > "aborting
> > due to unexpected upstream changes". So I just did a "dpkg
> > -buildpackage
> > -b" and then dpkg -i autofs... . Attached is the file build.out.gz
> > which
> > contains the stdout output. Clearly libtirpc is used somehow in the
> > build.
> > 
> > After restoring maps in /etc I did a service restart autofs and with
> > debug loglevel I get 
> > 
> > Apr  8 16:20:33 core324 automount[14615]: open_mount:247:
> > parse(sun):
> > cannot open mount module nfs
> > (/usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so: undefined symbol:
> > clnt_dg_create)
> > 
> > as reported. I then double checked and actually
> > 
> > root@core324:~# ldd /usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so
> >         linux-vdso.so.1 =>  (0x00007ffff7ffd000)
> >         libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6
> > (0x00007ffff79f3000)
> >         /lib64/ld-linux-x86-64.so.2 (0x0000555555554000)
> > 
> > no libtirpc. 
> > 
> > I will have to read up on how to properly rebuild the package. The
> > debian documentation is unfortunately not very user friendly, any
> > hints are appreciated.
> > 
> > Best Regards
> > 
> > Christof
> > 
> > On Fri, Apr 08, 2016 at 02:25:52PM +0200, Christof Koehler wrote:
> > > Hello again,
> > > > I've been thinking about this and I have a couple of thoughts.
> > > > 
> > > > As far a IPv6 goes using glibc RPC is, I think, not going to
> > > > work!
> > > > 
> > > > That's the first thing that needs to be sorted out.
> > > > 
> > > > I've been using libtirpc in Fedora and RHEL builds for nearly 10
> > > > years
> > > > so I don't think the library problem is with autofs.
> > > > 
> > > > This is an indication someone is doing something a little dumb:
> > > > 
> > > > automount[20444]: open_mount:247: parse(sun): cannot open mount
> > > > module
> > > > nfs (/usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so: undefined
> > > > symbol:
> > > > clnt_dg_create)
> > > 
> > > concerning my failures to build autofs. First the client has all
> > > libtirpc packages I think are necessary:
> > > # dpkg -l libtirpc\*|grep ii
> > > ii  libtirpc-dev                                   0.2.2-5ubuntu2
> > > ii  libtirpc1:amd64                                0.2.2-5ubuntu2
> > > 
> > > We have libtirpc1 on the machines by default and I had to
> > > install libtirpc-dev so that ./configure would conclude that
> > > --with-libtirpc should do anything. 
> > > 
> > > Actually I tried to compile autofs 5.1.1 from source and a new
> > > 5.0.7
> > > package from ubuntu's source deb.
> > > 
> > > Using the sources at 
> > > https://www.kernel.org/pub/linux/daemons/autofs/v5/
> > > I was basically confused what to do about the patches. Do I have
> > > to
> > > apply everything in patches-5.1.2 to autofs-5.1.1.tar.gz to get
> > > 5.1.2 ? 
> > > How do I do that automatically ? I noticed that autofs
> > > -5.1.1.tar.gz
> > > misses the patch mentioned in message 15 of
> > > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=737679
> > > but contained in autofs-5.1.1-revert-fix-libtirpc-name
> > > -clash.patch.
> > > 
> > > So to make it short I certainly messed something up
> > > somewhere, the final binary and libs were no success .
> > > Additionally
> > > installation did not play nice, although --prefix= was set it
> > > overwrote
> > > configuration files in /etc.  But I think I
> > > cleaned everything up afterwards.
> > > 
> > > If someone can provide some hints I would try it again.
> > > 
> > > After that I rebuild the 5.0.7 package from source deb after
> > > adding
> > > --with-libtirpc to debian/rules as suggested in the bug reports. I
> > > installed from that package.  I checked with ldd after installing
> > > that ldd /usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so was build
> > > with a
> > > reference to libtirpc. 
> > > 
> > > This try gave the error message in the ubuntu bug
> > > report 
> > > https://bugs.launchpad.net/ubuntu/+source/autofs/+bug/1564380
> > > 
> > > So, any hints are appreciated. As long as I can stick to 5.0.7
> > > rebuilt
> > > from the source deb installing/re-installing is no problem and I
> > > can try
> > > different things you might want. Assuming I can get the program to
> > > work :-)
> > > 
> > > Thank you very much for all your help !
> > > 
> > > 
> > > Best Regards
> > > 
> > > Christof
> > > 
> > > 
> > > 
> > > -- 
> > > Dr. rer. nat. Christof Köhler       email: 
> > > c.koehler@bccms.uni-bremen.de
> > > Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
> > > Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
> > > 28359 Bremen  
> > > 
> > > PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
> > > --
> > > To unsubscribe from this list: send the line "unsubscribe autofs"
> > > in
> > 
> > -- 
> > Dr. rer. nat. Christof Köhler       email: 
> > c.koehler@bccms.uni-bremen.de
> > Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
> > Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
> > 28359 Bremen  
> > 
> > PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
> 
> 
> 
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-08 16:15           ` Christof Koehler
@ 2016-04-10  2:17             ` Ian Kent
  0 siblings, 0 replies; 49+ messages in thread
From: Ian Kent @ 2016-04-10  2:17 UTC (permalink / raw)
  To: christof.koehler, autofs

On Fri, 2016-04-08 at 18:15 +0200, Christof Koehler wrote:
> OK, this actually fails to mount anything at all

That's right, as I say the RPC communication isn't failing in an
unexpected way so we aren't seeing any error messages.

About all that can be done is to add a patch that adds some extra
logging to try and get to the bottom of it.
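
To give an idea of the sort of thing I mean, even a standalone probe that
just reports why an RPC client couldn't be created can be useful;
clnt_spcreateerror(3) turns the create error into something readable. A
rough sketch only, with the host name and the NFS program/version numbers
purely as placeholders:

        #include <rpc/rpc.h>
        #include <stdio.h>

        int main(int argc, char *argv[])
        {
                const char *host = argc > 1 ? argv[1] : "core330";
                CLIENT *clnt;

                /* 100003 is the NFS program number; version 3 is queried
                 * here.  This only creates the client, no calls are made.
                 */
                clnt = clnt_create(host, 100003, 3, "tcp");
                if (!clnt) {
                        /* clnt_spcreateerror() formats rpc_createerr */
                        fprintf(stderr, "%s\n", clnt_spcreateerror(host));
                        return 1;
                }
                clnt_destroy(clnt);
                printf("client creation to %s succeeded\n", host);
                return 0;
        }

(On a libtirpc system this would be built with -ltirpc and the tirpc
include path.)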

> 
> Apr  8 18:12:53 core324 automount[963]: parse_mount: parse(sun): core
> of entry: options=fstype=nfs4,rw,intr,nosuid,soft,nodev,
> loc=core320:/locals
> Apr  8 18:12:53 core324 automount[963]: sun_mount: parse(sun):
> mounting root /local, mountpoint core320, what core320:/locals, fstype
> nfs4, options rw,intr,nosuid,soft,nodev
> Apr  8 18:12:53 core324 automount[963]: mount_mount: mount(nfs):
> root=/local name=core320 what=core320:/locals, fstype=nfs4,
> options=rw,intr,nosuid,soft,nodev
> Apr  8 18:12:53 core324 automount[963]: mount_mount: mount(nfs): nfs
> options="rw,intr,nosuid,soft,nodev", nobind=0, nosymlink=0, ro=0
> Apr  8 18:12:53 core324 automount[963]: get_nfs_info: called with host
> core320(192.168.220.70) proto 6 version 0x40
> Apr  8 18:12:53 core324 automount[963]: get_nfs_info: called with host
> core320(2001:638:708:1261:2000::70) proto 6 version 0x40
> Apr  8 18:12:53 core324 automount[963]: mount(nfs): no hosts available
> Apr  8 18:12:53 core324 automount[963]: dev_ioctl_send_fail: token =
> 3006
> Apr  8 18:12:53 core324 automount[963]: failed to mount /local/core320
> Apr  8 18:12:53 core324 automount[963]: handle_packet: type = 3
> 
> Maybe I am missing dependencies ... 
> 
> 
> On Fri, Apr 08, 2016 at 06:12:26PM +0200, Christof Koehler wrote:
> > Hello,
> > 
> > I managed to build a new autofs 5.1.1 from the ubuntu 16.04 source
> > package after throwing out all systemd dependencies. With that
> > 
> > root@core324:~/autofs-5.1.1# ldd /usr/lib/x86_64-linux
> > -gnu/autofs/mount_nfs.so
> >         linux-vdso.so.1 =>  (0x00007ffff7ffd000)
> >         libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0
> > (0x00007ffff7b73000)
> >         libtirpc.so.1 => /lib/x86_64-linux-gnu/libtirpc.so.1
> > (0x00007ffff794b000)
> >         libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6
> > (0x00007ffff7585000)
> >         /lib64/ld-linux-x86-64.so.2 (0x0000555555554000)
> >         libgssglue.so.1 => /lib/x86_64-linux-gnu/libgssglue.so.1
> > (0x00007ffff737b000)
> >         libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2
> > (0x00007ffff7177000)
> > 
> > However, I had to edit lib/rpc_subs.c as indicated in 
> > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=737679, otherwise
> > it would not 
> > work (/usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so: undefined
> > symbol: auth_put).
> > 
> > With the new binary /local/core330 does not mount either, but the
> > error message is
> > probably an improvement:
> > 
> > Apr  8 18:05:25 core324 automount[963]: parse_mount: parse(sun):
> > dequote("core330:/locals"
> > ) -> core330:/locals
> > Apr  8 18:05:25 core324 automount[963]: parse_mount: parse(sun):
> > core of entry: options=fs
> > type=nfs4,rw,intr,nosuid,soft,nodev, loc=core330:/locals
> > Apr  8 18:05:25 core324 automount[963]: sun_mount: parse(sun):
> > mounting root /local, mount
> > point core330, what core330:/locals, fstype nfs4, options
> > rw,intr,nosuid,soft,nodev
> > Apr  8 18:05:25 core324 automount[963]: mount_mount: mount(nfs):
> > root=/local name=core330 
> > what=core330:/locals, fstype=nfs4, options=rw,intr,nosuid,soft,nodev
> > Apr  8 18:05:25 core324 automount[963]: mount_mount: mount(nfs): nfs
> > options="rw,intr,nosu
> > id,soft,nodev", nobind=0, nosymlink=0, ro=0
> > Apr  8 18:05:25 core324 automount[963]: get_nfs_info: called with
> > host core330(192.168.220
> > .118) proto 6 version 0x40
> > Apr  8 18:05:25 core324 automount[963]: get_nfs_info: called with
> > host core330(fd5f:852:a27c:1261:2000::118) proto 6 version 0x40
> > Apr  8 18:05:25 core324 automount[963]: get_nfs_info: called with
> > host core330(2001:638:708:1261:2000::118) proto 6 version 0x40
> > Apr  8 18:05:25 core324 automount[963]: mount(nfs): no hosts
> > available
> > Apr  8 18:05:25 core324 automount[963]: dev_ioctl_send_fail: token =
> > 3004
> > Apr  8 18:05:25 core324 automount[963]: failed to mount
> > /local/core330
> > Apr  8 18:05:25 core324 automount[963]: handle_packet: type = 3
> > Apr  8 18:05:25 core324 automount[963]:
> > handle_packet_missing_indirect: token 3005, name core330, request
> > pid 2397
> > 
> > I had no success trying to get the 5.0.7 source package to actually
> > link libtirpc, 
> > no idea why.
> >  
> > Best Regards
> > 
> > Christof
> > 
> > On Fri, Apr 08, 2016 at 04:29:07PM +0200, Christof Koehler wrote:
> > > Hello,
> > > 
> > > apparently I confused my 5.1.1 source built experiment and my
> > > debian
> > > package rebuild experiment when I reported that libtirpc was used
> > > in my
> > > last email. So here is a new try to rebuild the deb source with
> > > --with-libtirpc.
> > > 
> > > I did a apt-get source autofs and added --with-libtirpc to debian
> > > rules.
> > > After that it would of course not allow me to build a package,
> > > "aborting
> > > due to unexpected upstream changes". So I just did a "dpkg
> > > -buildpackage
> > > -b" and then dpkg -i autofs... . Attached is the file build.out.gz
> > > which
> > > contains the stdout output. Clearly libtirpc is used somehow in
> > > the build.
> > > 
> > > After restoring maps in /etc I did a service restart autofs and
> > > with
> > > debug loglevel I get 
> > > 
> > > Apr  8 16:20:33 core324 automount[14615]: open_mount:247:
> > > parse(sun):
> > > cannot open mount module nfs
> > > (/usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so: undefined symbol:
> > > clnt_dg_create)
> > > 
> > > as reported. I then double checked and actually
> > > 
> > > root@core324:~# ldd /usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so
> > >         linux-vdso.so.1 =>  (0x00007ffff7ffd000)
> > >         libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6
> > > (0x00007ffff79f3000)
> > >         /lib64/ld-linux-x86-64.so.2 (0x0000555555554000)
> > > 
> > > no libtirpc. 
> > > 
> > > I will have to read up on how to properly rebuild the package. The
> > > debian documentation is unfortunately not very user friendly, any
> > > hints are appreciated.
> > > 
> > > Best Regards
> > > 
> > > Christof
> > > 
> > > On Fri, Apr 08, 2016 at 02:25:52PM +0200, Christof Koehler wrote:
> > > > Hello again,
> > > > > I've been thinking about this and I have a couple of thoughts.
> > > > > 
> > > > > As far a IPv6 goes using glibc RPC is, I think, not going to
> > > > > work!
> > > > > 
> > > > > That's the first thing that needs to be sorted out.
> > > > > 
> > > > > I've been using libtirpc in Fedora and RHEL builds for nearly
> > > > > 10 years
> > > > > so I don't think the library problem is with autofs.
> > > > > 
> > > > > This is an indication someone is doing something a little
> > > > > dumb:
> > > > > 
> > > > > automount[20444]: open_mount:247: parse(sun): cannot open
> > > > > mount module
> > > > > nfs (/usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so: undefined
> > > > > symbol:
> > > > > clnt_dg_create)
> > > > 
> > > > concerning my failures to build autofs. First the client has all
> > > > libtirpc packages I think are necessary:
> > > > # dpkg -l libtirpc\*|grep ii
> > > > ii  libtirpc-dev                                   0.2.2
> > > > -5ubuntu2
> > > > ii  libtirpc1:amd64                                0.2.2
> > > > -5ubuntu2
> > > > 
> > > > We have libtirpc1 on the machines by default and I had to
> > > > install libtirpc-dev so that ./configure would conclude that
> > > > --with-libtirpc should do anything. 
> > > > 
> > > > Actually I tried to compile autofs 5.1.1 from source and a new
> > > > 5.0.7
> > > > package from ubuntu's source deb.
> > > > 
> > > > Using the sources at 
> > > > https://www.kernel.org/pub/linux/daemons/autofs/v5/
> > > > I was basically confused what to do about the patches. Do I have
> > > > to
> > > > apply everything in patches-5.1.2 to autofs-5.1.1.tar.gz to get
> > > > 5.1.2 ? 
> > > > How do I do that automatically ? I noticed that autofs
> > > > -5.1.1.tar.gz
> > > > misses the patch mentioned in message 15 of
> > > > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=737679
> > > > but contained in autofs-5.1.1-revert-fix-libtirpc-name
> > > > -clash.patch.
> > > > 
> > > > So to make it short I certainly messed something up
> > > > somewhere, the final binary and libs were no success .
> > > > Additionally
> > > > installation did not play nice, although --prefix= was set it
> > > > overwrote
> > > > configuration files in /etc.  But I think I
> > > > cleaned everything up afterwards.
> > > > 
> > > > If someone can provide some hints I would try it again.
> > > > 
> > > > After that I rebuild the 5.0.7 package from source deb after
> > > > adding
> > > > --with-libtirpc to debian/rules as suggested in the bug reports.
> > > > I
> > > > installed from that package.  I checked with ldd after
> > > > installing
> > > > that ldd /usr/lib/x86_64-linux-gnu/autofs/mount_nfs.so was build
> > > > with a
> > > > reference to libtirpc. 
> > > > 
> > > > This try gave the error message in the ubuntu bug
> > > > report 
> > > > https://bugs.launchpad.net/ubuntu/+source/autofs/+bug/1564380
> > > > 
> > > > So, any hints are appreciated. As long as I can stick to 5.0.7
> > > > rebuilt
> > > > from the source deb installing/re-installing is no problem and I
> > > > can try
> > > > different things you might want. Assuming I can get the program
> > > > to work :-)
> > > > 
> > > > Thank you very much for all your help !
> > > > 
> > > > 
> > > > Best Regards
> > > > 
> > > > Christof
> > > > 
> > > > 
> > > > 
> > > > -- 
> > > > Dr. rer. nat. Christof Köhler       email: 
> > > > c.koehler@bccms.uni-bremen.de
> > > > Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
> > > > Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
> > > > 28359 Bremen  
> > > > 
> > > > PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
> > > > --
> > > > To unsubscribe from this list: send the line "unsubscribe
> > > > autofs" in
> > > 
> > > -- 
> > > Dr. rer. nat. Christof Köhler       email: 
> > > c.koehler@bccms.uni-bremen.de
> > > Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
> > > Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
> > > 28359 Bremen  
> > > 
> > > PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
> > 
> > 
> > 
> > -- 
> > Dr. rer. nat. Christof Köhler       email: 
> > c.koehler@bccms.uni-bremen.de
> > Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
> > Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
> > 28359 Bremen  
> > 
> > PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
> > --
> > To unsubscribe from this list: send the line "unsubscribe autofs" in
> 
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-09  9:56           ` Christof Koehler
@ 2016-04-10  2:29             ` Ian Kent
  2016-04-25  4:40             ` Ian Kent
  1 sibling, 0 replies; 49+ messages in thread
From: Ian Kent @ 2016-04-10  2:29 UTC (permalink / raw)
  To: christof.koehler, autofs

On Sat, 2016-04-09 at 11:56 +0200, Christof Koehler wrote:
> Hello,
> 
> yes, indeed the 5.0.7 source deb rebuild is very strange. I also
> checked
> the libraries under debian/usr/lib/... in the build directory and they
> do not contain a libtirpc reference.
> 
> I can see two possible strategies:
> 1.  Concentrate on the 5.1.1 deb source I grabbed from ubuntu 16.04
> and the 
> deb package rebuild from that. Those libraries obviously contain a
> reference
> to libtirpc but do not mount anything. Too old libtirpc comes to mind
> then ?

An old libtirpc is a possibility.

And I think the best chance is to use 5.1.1 and possibly back port any
changes.

But I don't have an IPv6 environment I can use to help so that's a
problem straight away.

As I say, the only other thing we can do is add some targeted debug
logging and see if we can spot what is going wrong.

There haven't been any changes to the RPC code since 5.1.1 (at least none
that I remember, I'll check later) so that should be a good source base
to use regardless of whether it's built from original source or the deb
package.

I know there were some changes to libtirpc, but I use a lower level of
communication, and those changes formed the bulk of the changes I made to
autofs for the RPC communication. Maybe I missed some dependencies myself;
hard to tell really without knowing what's going on.

Ian
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-08 10:10   ` Ian Kent
  2016-04-08 10:14     ` Ian Kent
  2016-04-08 12:25     ` Christof Koehler
@ 2016-04-11  2:42     ` Ian Kent
  2016-04-11 16:32       ` Christof Koehler
  2 siblings, 1 reply; 49+ messages in thread
From: Ian Kent @ 2016-04-11  2:42 UTC (permalink / raw)
  To: christof.koehler, autofs

On Fri, 2016-04-08 at 18:10 +0800, Ian Kent wrote:
> On Fri, 2016-04-08 at 12:46 +0800, Ian Kent wrote:
> > On Thu, 2016-04-07 at 16:19 +0200, Christof Koehler wrote:
> > > Hello everybody,
> > > 
> > > I am on ubuntu 14.04 with autofs 5.0.7 and I observe an (for me)
> > > unexpected behaviour as detailed below. Apparently using autofs
> > > NFS4
> > > mounts fall back to using IPv4 addresses although valid IPv6
> > > addresses
> > > are available under certain circumstances, while a plain mount
> > > works
> > > as
> > > expected.
> > 
> > Can you provide a full debug log.
> > 
> > It might be autofs interfering with the mount but mount.nfs(8) is a
> > more
> > likely candidate.
> > 
> > > 
> > > Setup:
> > > ------
> > > Both, NFS server and client, are configured with an IPv4 address
> > > and
> > > an
> > > IPv6 GUA and IPv6 ULA. For brevity I will shorten the IPv4 address
> > > to
> > > 192, the GUA to 2001 and the ULA to fd5f  below. I will only
> > > change
> > > the
> > > DNS AAAA record in the following, the network configuration on
> > > server/client or the A records never change. Server and client
> > > have
> > > always working IPv4 and IPv6 GUA and ULA.
> > > 
> > > Test with mount:
> > > ----------------
> > > Using a plain "mount  -t nfs4 server:/locals /mnt/disk1/" on the
> > > client
> > > gives depending on the DNS entries for the server the expected
> > > source/target selection:
> > > 
> > > Server DNS entry|	client address used to mount
> > > 2001		|	2001
> > > fd5f		|	fd5f
> > > 2001+fd5f	|	fd5f
> > > 
> > > So in all cases RFC 6724/3484 is observed selecting the addresses.
> > > Please note that the server has two AAAA records (multi-homed) in
> > > the
> > > last test.
> > > 
> > > Test with autofs:
> > > -----------------
> > > A map lookup will yield "-fstype=nfs4,rw,intr,nosuid,soft,nodev
> > > server:/locals"
> > > for the mount. Now I change again the servers AAAA records with
> > > the
> > > following result:
> > > 
> > > Server DNS entry|	client address used to mount
> > > 2001		|	2001
> > > fd5f		|	fd5f
> > > 2001+fd5f	|	192
> > > 
> > > For a multi-homed NFS4 server autofs apparently falls back to IPv4
> > > although valid IPv6 options exist. As shown above just mounting
> > > without
> > > autofs would stick to RFC 6724/3484 instead. I believe that autofs
> > > should 
> > > also select fd5f ULAs in the multi-homed case.
> > > 
> > > Is this a known behaviour ? Do any workarounds exist ? I could not
> > > find
> > > anything.
> > > 
> > > I tried to compile autofs 5.1.1 with --with-libtirpc because
> > > of https://bugs.launchpad.net/ubuntu/+source/autofs/+bug/1101779 b
> > > ut
> > > could not get the binary to work. I filed a bug report for the
> > > behaviour
> > > described above 
> > > https://bugs.launchpad.net/ubuntu/+source/autofs/+bug/1564380
> > > but suspect that this is better suited for this list.
> 
> I've been thinking about this and I have a couple of thoughts.
> 
> As far a IPv6 goes using glibc RPC is, I think, not going to work!

I'm still trying to understand what happens here and I've only just now
looked at the logs you provided and compared to the source.

The first thing that stands out is that if libtirpc is not being used
all IPv6 hosts will be ignored because (my impression is that) glibc RPC
doesn't support IPv6.

Basically the get_proximity() function will return PROXIMITY_UNSUPPORTED
due to:

	case AF_INET6:
#ifndef WITH_LIBTIRPC
                return PROXIMITY_UNSUPPORTED;
#else

and the add_new_host() function will not add the host to the hosts list
due to:

        /*
         * If we tried to add an IPv6 address and we don't have IPv6
         * support return success in the hope of getting an IPv4
         * address later.
         */
        if (prx == PROXIMITY_UNSUPPORTED)
                return 1;

And the log doesn't give any hint as to what's happening.
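
As an aside, to give a feel for what "proximity" means in get_proximity()
above: the idea is to compare a candidate server address against the local
interface addresses to rank how close it is. A stripped-down, illustrative
sketch of just the "is this one of our own addresses" part for IPv6, not
the actual autofs code (which does quite a bit more than this):

        #include <sys/socket.h>
        #include <ifaddrs.h>
        #include <netinet/in.h>
        #include <string.h>
        #include <stdio.h>

        /* Return 1 if 'target' is configured on a local interface, else 0.
         * A rough stand-in for the "local address" check only.
         */
        static int is_local_ipv6(const struct in6_addr *target)
        {
                struct ifaddrs *ifa, *p;
                int local = 0;

                if (getifaddrs(&ifa))
                        return 0;
                for (p = ifa; p; p = p->ifa_next) {
                        if (!p->ifa_addr || p->ifa_addr->sa_family != AF_INET6)
                                continue;
                        if (!memcmp(&((struct sockaddr_in6 *)p->ifa_addr)->sin6_addr,
                                    target, sizeof(*target))) {
                                local = 1;
                                break;
                        }
                }
                freeifaddrs(ifa);
                return local;
        }

        int main(void)
        {
                printf("::1 local: %d\n", is_local_ipv6(&in6addr_loopback));
                return 0;
        }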

The only thing that the log shows is that in the "2001" case autofs thinks
there is a single IPv4 address, so it attempts to mount using the host
name as it should.

But in the "ipv4" case autofs is using the IPv4 address, which implies it
thinks there is more than one (IPv4) address for the host being
considered, so it uses the IP address for the mount.

There appears to be an autofs problem with this latter case because (I
think) your description implies there is only one IPv4 address in all
cases.

Not only that, in both cases only one address has proximity calculated
for it, which also implies there is only one address. So if there was no
mistake in the code it should also use the host name for the latter
mount.

It's mount.nfs(8) mounting from the IPv6 address (when given a host name,
not an address) in the former case, and not autofs, that's getting you an
IPv6 mount. And, AFAICS, the mount.nfs you're using does use libtirpc.

Finally, if libtirpc is used there is no preference given to IPv6 over
IPv4 which may not be what is expected and might not be what should be
done.
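
For reference, the address selection that plain mount benefits from comes,
as far as I understand it, from getaddrinfo(3): glibc sorts the returned
list according to RFC 3484/6724, so simply taking the candidates in the
order returned, rather than grouping them by family, would tend to give
the ULA/GUA before the IPv4 address. A minimal sketch of that ordering
(nothing autofs specific; the host name is just an example):

        #include <sys/socket.h>
        #include <netdb.h>
        #include <arpa/inet.h>
        #include <stdio.h>
        #include <string.h>

        int main(int argc, char *argv[])
        {
                const char *host = argc > 1 ? argv[1] : "core330";
                struct addrinfo hints, *res, *p;
                char buf[INET6_ADDRSTRLEN];
                int err;

                memset(&hints, 0, sizeof(hints));
                hints.ai_family = AF_UNSPEC;    /* both A and AAAA records */
                hints.ai_socktype = SOCK_STREAM;

                err = getaddrinfo(host, NULL, &hints, &res);
                if (err) {
                        fprintf(stderr, "%s: %s\n", host, gai_strerror(err));
                        return 1;
                }
                /* Results come back already sorted per RFC 3484/6724 */
                for (p = res; p; p = p->ai_next) {
                        void *addr = p->ai_family == AF_INET6
                            ? (void *)&((struct sockaddr_in6 *)p->ai_addr)->sin6_addr
                            : (void *)&((struct sockaddr_in *)p->ai_addr)->sin_addr;
                        printf("%s\n", inet_ntop(p->ai_family, addr, buf,
                                                 sizeof(buf)));
                }
                freeaddrinfo(res);
                return 0;
        }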

But we can't really start to work out what needs to be done unless
libtirpc is being used.

Ian
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-11  2:42     ` Ian Kent
@ 2016-04-11 16:32       ` Christof Koehler
  2016-04-11 16:35         ` Christof Koehler
  0 siblings, 1 reply; 49+ messages in thread
From: Christof Koehler @ 2016-04-11 16:32 UTC (permalink / raw)
  To: autofs

Hello,

I will try to consolidate the answer in a single email without messing
up the quoting too much.

> But we can't really start to work out what needs to be done unless
> libtirpc is being used.

> And I think the best chance is to use 5.1.1 and possibly back port any
> changes.

> As I say, the only other thing we can do is add some targeted debug
> logging and see if we can spot what is going wrong.

I will then stick to the plan to defer this
until ubuntu 16.04 has been released which should contain autofs 5.1.1
and libtirpc 0.25.2. From the libtirpc git repo and the sourceforge page
I see that 0.25.2 is from 2014, but not even debian unstable is
using a newer version (and from the changelog things like gss_api
support are the main changes ?). 

I will use a virtual machine with ubuntu 16.04 (or if preferable debian 
testing or unstable) and then start experimenting. In that environment we 
can do to autofs and also libtirpc (and nfs-common if necessary) whatever 
is needed to get the necessary information. 

I will let you know when I start with that and what the baseline with
the original packages of the distribution and simple rebuild of autofs
--with-libtirpc is.

>> Apr  8 18:05:25 core324 automount[963]: get_nfs_info: called with host
>> core330(fd5f:852:a27c:1261:2000::118) proto 6 version 0x40
>> Apr  8 18:05:25 core324 automount[963]: get_nfs_info: called with host
>> core330(2001:638:708:1261:2000::118) proto 6 version 0x40
>> Apr  8 18:05:25 core324 automount[963]: mount(nfs): no hosts available

> Sadly that doesn't tell us much either, only that the rpc communication
> has failed to get a result in some expected way.

Well, at least it shows that this autofs build is somehow aware of
all the different IP addresses and tries the IPv4 one first. Of course
no other conclusions can be drawn from this.

> That's right, as I say the RPC communication isn't failing in an
> unexpected way so we aren't seeing any error messages.
> 
> About all that can be done is to add a patch that adds some extra
> logging to try and get to the bottom of it.

That should be possible with a virtual machine as mentioned above.

> The first thing that stands out is that if libtirpc is not being used
> all IPv6 hosts will be ignored because (my impression is that) glibc RPC
> doesn't support IPv6.

The libtirpc documentation says that libtirpc is needed for IPv6-ready
RPC support, http://nfsv4.bullopensource.org/doc/tirpc_rpcbind.php as
linked from sourceforge.
It might even be that glibc actually no longer contains any RPC
functionality ?
https://archives.gentoo.org/gentoo-dev/message/186be8dc9753d18aafc9a5a616b3b991
This would be supported by information on the tirpc web page mentioned
above.
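
From what I could piece together from that documentation, the
transport-independent part is visible in the API itself: the netids such
as "tcp6" come from /etc/netconfig and rpcbind can be asked for a server
address on a given transport. Something like the following, copied
together from the man pages purely as an illustration (I have not tried
it; the host name and program number are just examples):

        #include <rpc/rpc.h>
        #include <netconfig.h>
        #include <netinet/in.h>
        #include <stdio.h>

        int main(void)
        {
                struct netconfig *nconf;
                struct sockaddr_in6 sa6;
                struct netbuf svcaddr = {
                        .maxlen = sizeof(sa6), .len = 0, .buf = &sa6
                };

                /* "tcp6" is one of the netids listed in /etc/netconfig */
                nconf = getnetconfigent("tcp6");
                if (!nconf) {
                        fprintf(stderr, "no tcp6 transport configured\n");
                        return 1;
                }
                /* Ask the server's rpcbind for the NFS (100003) v3 address
                 * on that transport; fails if nothing is registered there.
                 */
                if (rpcb_getaddr(100003, 3, nconf, &svcaddr, "core330"))
                        printf("NFS is registered on tcp6\n");
                else
                        printf("no tcp6 registration found\n");
                freenetconfigent(nconf);
                return 0;
        }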

As far as I understand your analysis you are saying the IPv6
mount in the "2001" case is just working by accident, right ?

> It's mount.nfs(8) mounting from the IPv6 address (when given a host name
> not an address) in the former case and not autofs that's getting you an
> IPv6 mount. And, AFAICS, the mount.nfs your using does use libtirpc.
Yes.
# ldd /sbin/mount.nfs4|grep tirpc
        libtirpc.so.1 => /lib/x86_64-linux-gnu/libtirpc.so.1 (0x00007ffff7d9d000)
and libtirpc is a hard package dependency of nfs-common.

Best Regards

Christof
-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-11 16:32       ` Christof Koehler
@ 2016-04-11 16:35         ` Christof Koehler
  2016-04-12  1:07           ` Ian Kent
  0 siblings, 1 reply; 49+ messages in thread
From: Christof Koehler @ 2016-04-11 16:35 UTC (permalink / raw)
  To: autofs

Oops, replace 0.25.2 by 0.2.5 ... Sorry.

On Mon, Apr 11, 2016 at 06:32:41PM +0200, Christof Koehler wrote:
> Hello,
> 
> I will try to consolidate the answer in a single email without messing
> up the quoting too much.
> 
> > But we can't really start to work out what needs to be done unless
> > libtirpc is being used.
> 
> > And I think the best chance is to use 5.1.1 and possibly back port any
> > changes.
> 
> > As I say, the only other thing we can do is add some targeted debug
> > logging and see if we can spot what is going wrong.
> 
> I will then stick to the plan to defer this
> until ubuntu 16.04 has been released which should contain autofs 5.1.1
> and libtirpc 0.25.2. From the libtirpc git repo and the sourceforge page
> I see that 0.25.2 is from 2014, but not even debian unstable is
> using a newer version (and from the changelog things like gss_api
> support are the main changes ?). 
> 
> I will use a virtual machine with ubuntu 16.04 (or if preferable debian 
> testing or unstable) and then start experimenting. In that environment we 
> can do to autofs and also libtirpc (and nfs-common if necessary) whatever 
> is needed to get the necessary information. 
> 
> I will let you know when I start with that and what the baseline with
> the original packages of the distribution and simple rebuild of autofs
> --with-libtirpc is.
> 
> >> Apr  8 18:05:25 core324 automount[963]: get_nfs_info: called with host
> >> core330(fd5f:852:a27c:1261:2000::118) proto 6 version 0x40
> >> Apr  8 18:05:25 core324 automount[963]: get_nfs_info: called with host
> >> core330(2001:638:708:1261:2000::118) proto 6 version 0x40
> >> Apr  8 18:05:25 core324 automount[963]: mount(nfs): no hosts available
> 
> > Sadly that doesn't tell us much either, only that the rpc communication
> > has failed to get a result in some expected way.
> 
> Well, at least it shows that this autofs built at least is somehow aware of
> all the different IP adresses and trying the IPv4 one first. Of course
> no other conclusions can be drawn from this.
> 
> > That's right, as I say the RPC communication isn't failing in an
> > unexpected way so we aren't seeing any error messages.
> > 
> > About all that can be done is to add a patch that adds some extra
> > logging to try and get to the bottom of it.
> 
> That should be possible with a virtual machine as mentioned above.
> 
> > The first thing that stands out is that if libtirpc is not being used
> > all IPv6 hosts will be ignored because (my impression is that) glibc RPC
> > doesn't support IPv6.
> 
> The libtirpc documentation says that libtirpc is needed for IPv6 ready
> rpc support, http://nfsv4.bullopensource.org/doc/tirpc_rpcbind.php as
> linked from sourceforge.
> Might even be that glibc does actually no longer contain any rpc
> functionality ?
> https://archives.gentoo.org/gentoo-dev/message/186be8dc9753d18aafc9a5a616b3b991
> This would be supported by information on the tirpc web page mentioned
> above.
> 
> As far as I understand your analysis you are saying the IPv6
> mount in the "2001" case is just working by accident, right ?
> 
> > It's mount.nfs(8) mounting from the IPv6 address (when given a host name
> > not an address) in the former case and not autofs that's getting you an
> > IPv6 mount. And, AFAICS, the mount.nfs your using does use libtirpc.
> Yes.
> # ldd /sbin/mount.nfs4|grep tirpc
>         libtirpc.so.1 => /lib/x86_64-linux-gnu/libtirpc.so.1 (0x00007ffff7d9d000)
> and libtirpc is a hard package dependency of nfs-common.
> 
> Best Regards
> 
> Christof
> -- 
> Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
> Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
> Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
> 28359 Bremen  
> 
> PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
> --
> To unsubscribe from this list: send the line "unsubscribe autofs" in

-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-11 16:35         ` Christof Koehler
@ 2016-04-12  1:07           ` Ian Kent
  0 siblings, 0 replies; 49+ messages in thread
From: Ian Kent @ 2016-04-12  1:07 UTC (permalink / raw)
  To: christof.koehler, autofs

On Mon, 2016-04-11 at 18:35 +0200, Christof Koehler wrote:
> Oops, replace 0.25.2 by 0.2.5 ... Sorry.
> 
> On Mon, Apr 11, 2016 at 06:32:41PM +0200, Christof Koehler wrote:
> > Hello,
> > 
> > I will try to consolidate the answer in a single email without
> > messing
> > up the quoting too much.
> > 
> > > But we can't really start to work out what needs to be done unless
> > > libtirpc is being used.
> > 
> > > And I think the best chance is to use 5.1.1 and possibly back port
> > > any
> > > changes.
> > 
> > > As I say, the only other thing we can do is add some targeted
> > > debug
> > > logging and see if we can spot what is going wrong.
> > 
> > I will then stick to the plan to defer this
> > until ubuntu 16.04 has been released which should contain autofs
> > 5.1.1
> > and libtirpc 0.25.2. From the libtirpc git repo and the sourceforge
> > page
> > I see that 0.25.2 is from 2014, but not even debian unstable is
> > using a newer version (and from the changelog things like gss_api
> > support are the main changes ?). 
> > 
> > I will use a virtual machine with ubuntu 16.04 (or if preferable
> > debian 
> > testing or unstable) and then start experimenting. In that
> > environment we 
> > can do to autofs and also libtirpc (and nfs-common if necessary)
> > whatever 
> > is needed to get the necessary information. 
> > 
> > I will let you know when I start with that and what the baseline
> > with
> > the original packages of the distribution and simple rebuild of
> > autofs
> > --with-libtirpc is.
> > 
> > > > Apr  8 18:05:25 core324 automount[963]: get_nfs_info: called
> > > > with host
> > > > core330(fd5f:852:a27c:1261:2000::118) proto 6 version 0x40
> > > > Apr  8 18:05:25 core324 automount[963]: get_nfs_info: called
> > > > with host
> > > > core330(2001:638:708:1261:2000::118) proto 6 version 0x40
> > > > Apr  8 18:05:25 core324 automount[963]: mount(nfs): no hosts
> > > > available
> > 
> > > Sadly that doesn't tell us much either, only that the rpc
> > > communication
> > > has failed to get a result in some expected way.
> > 
> > Well, at least it shows that this autofs built at least is somehow
> > aware of
> > all the different IP adresses and trying the IPv4 one first. Of
> > course
> > no other conclusions can be drawn from this.
> > 
> > > That's right, as I say the RPC communication isn't failing in an
> > > unexpected way so we aren't seeing any error messages.
> > > 
> > > About all that can be done is to add a patch that adds some extra
> > > logging to try and get to the bottom of it.
> > 
> > That should be possible with a virtual machine as mentioned above.
> > 
> > > The first thing that stands out is that if libtirpc is not being
> > > used
> > > all IPv6 hosts will be ignored because (my impression is that)
> > > glibc RPC
> > > doesn't support IPv6.
> > 
> > The libtirpc documentation says that libtirpc is needed for IPv6
> > ready
> > rpc support, http://nfsv4.bullopensource.org/doc/tirpc_rpcbind.php a
> > s
> > linked from sourceforge.
> > Might even be that glibc does actually no longer contain any rpc
> > functionality ?
> > https://archives.gentoo.org/gentoo-dev/message/186be8dc9753d18aafc9a
> > 5a616b3b991
> > This would be supported by information on the tirpc web page
> > mentioned
> > above.
> > 
> > As far as I understand your analysis you are saying the IPv6
> > mount in the "2001" case is just working by accident, right ?

Basically, yes.

The code that determines if there are multiple addresses for a host has
a mistake in it. If it was working correctly I think you would have seen
both mounts as IPv4.

Fixing that mistake is possibly going to be part of a larger change for
this so I'll leave it for now.
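
To make that concrete, the decision boils down to counting how many
addresses the name resolves to: a single address means the mount can be
done by host name, more than one means an address is picked and used
directly. A deliberately simplified, hypothetical version of that check
(not the actual autofs code; the host name is just the one from this
thread):

        #include <sys/socket.h>
        #include <netdb.h>
        #include <stdio.h>
        #include <string.h>

        /* Count how many addresses a host name resolves to */
        static int count_addresses(const char *host)
        {
                struct addrinfo hints, *res, *p;
                int n = 0;

                memset(&hints, 0, sizeof(hints));
                hints.ai_family = AF_UNSPEC;
                hints.ai_socktype = SOCK_STREAM;
                if (getaddrinfo(host, NULL, &hints, &res))
                        return -1;
                for (p = res; p; p = p->ai_next)
                        n++;
                freeaddrinfo(res);
                return n;
        }

        int main(void)
        {
                printf("core330 resolves to %d address(es)\n",
                       count_addresses("core330"));
                return 0;
        }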

> > 
> > > It's mount.nfs(8) mounting from the IPv6 address (when given a
> > > host name
> > > not an address) in the former case and not autofs that's getting
> > > you an
> > > IPv6 mount. And, AFAICS, the mount.nfs your using does use
> > > libtirpc.
> > Yes.
> > # ldd /sbin/mount.nfs4|grep tirpc
> >         libtirpc.so.1 => /lib/x86_64-linux-gnu/libtirpc.so.1
> > (0x00007ffff7d9d000)
> > and libtirpc is a hard package dependency of nfs-common.
> > 
> > Best Regards
> > 
> > Christof
> > -- 
> > Dr. rer. nat. Christof Köhler       email: 
> > c.koehler@bccms.uni-bremen.de
> > Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
> > Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
> > 28359 Bremen  
> > 
> > PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
> > --
> > To unsubscribe from this list: send the line "unsubscribe autofs" in
> 
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-09  9:56           ` Christof Koehler
  2016-04-10  2:29             ` Ian Kent
@ 2016-04-25  4:40             ` Ian Kent
  2016-04-25 15:06               ` Christof Koehler
  1 sibling, 1 reply; 49+ messages in thread
From: Ian Kent @ 2016-04-25  4:40 UTC (permalink / raw)
  To: christof.koehler, autofs

On Sat, 2016-04-09 at 11:56 +0200, Christof Koehler wrote:
> Hello,
> 
> yes, indeed the 5.0.7 source deb rebuild is very strange. I also
> checked
> the libraries under debian/usr/lib/... in the build directory and they
> do not contain a libtirpc reference.

I had a look at this and found a few things.

The 5.0.7 Ubuntu build is sensitive to the library link order, which
seems sensible enough, although I don't understand why I don't see this
problem with Fedora.

Adding the dependency to the build and adding a patch to fix the
Makefiles link order gets binaries with the libtirpc dependency.

The clean target of the Makefile in the modules directory doesn't remove
the yacc-generated files, which causes the Ubuntu build system to complain
on second and subsequent runs of the build.

The autofs-5.0.6-fix-libtirpc-name-clash.patch of 5.0.7 needs to be
reverted in 14.04.4 for both the 5.0.7 and 5.1.1 builds to get rid of
the get_auth problem. That's due to the old libtirpc.

Once built, the resulting package always fails when probing proximity and
availability (i.e. calling into libtirpc).

The same thing happens with the 5.1.1 built on 14.04.4 with the above
changes.

I added the Makefile link order change, the Makefile clean change, and
added the libtirpc dependency to a Ubuntu 16.04 install and built the
package.

I tried a couple of simple indirect mounts and it was able to mount
them, so it seems the 14.04.4 problem is due to the old libtirpc
version.

But it SEGVed in libtirpc when using a -hosts mount.
I had a look at that and can't see any reason for the SEGV so that's a
puzzle.

I use standard tirpc calls and they function fine in Fedora, so the
seemingly innocent calls where it crashes are very strange.

If there was something that looked remotely like it could cause a
problem at least I'd have something to follow up on.

This isn't good and I'm not sure where to go from here.

Ian
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-25  4:40             ` Ian Kent
@ 2016-04-25 15:06               ` Christof Koehler
  2016-04-26  1:06                 ` Ian Kent
  0 siblings, 1 reply; 49+ messages in thread
From: Christof Koehler @ 2016-04-25 15:06 UTC (permalink / raw)
  To: Ian Kent; +Cc: autofs

Hello,

that is bad news overall, but thank you very much for testing and trying
it.  I had just installed 16.04 into a vm last week and would have made
first attempts with autofs this week. So you saved me quite some work
discovering that it is broken in an interesting way (I assume the SEGV
refers to 16.04).

A very complicated and confusing situation. This is basically the same 
situation I was in when I first asked about IPv6 support on this list,
not knowing what to do or whom to ask about what, and in which order.

I fear I can be of no actual help with the segfault, I am not a 
C programmer at all.  

However, if I understand the difference between direct and
indirect maps correctly what I am using is an indirect map ? The
auto.master line is "/local  /etc/auto.local --timeout=150" and
/etc/auto.local is an executable map which prints [1]
# /etc/auto.local core330
-fstype=nfs4,rw,intr,nosuid,soft,nodev core330:/locals

So re-building and testing autofs on 16.04 with IPv6 could still be
useful assuming the segfault is a separable problem (which might not be
the case of course) ?

I can also offer to try to do it with debian testing in a vm. If it also
segfaults I could file a bug report hoping to get the maintainer
interested, especially if there are no problems with fedora. Shall I try
that ? Instead or in addition to the test with IPv6 mentioned above ?

As far as I can see the ubuntu stuff is an import from debian
testing with no actual maintainer.

Best Regards

Christof

[1] I am using executable maps to force a bind mount when accessing
something in /local/$hostname from the same machine $hostname, i.e. on
core330 a call to /etc/auto.local prints "-fstype=bind
:/nfs4exports/locals". Without that, autofs would use nfs4 even where a
faster and lighter-weight bind mount would do the same job.

On Mon, Apr 25, 2016 at 12:40:48PM +0800, Ian Kent wrote:
> On Sat, 2016-04-09 at 11:56 +0200, Christof Koehler wrote:
> > Hello,
> > 
> > yes, indeed the 5.0.7 source deb rebuild is very strange. I also
> > checked
> > the libraries under debian/usr/lib/... in the build directory and they
> > do not contain a libtirpc reference.
> 
> I had a look at this and found a few things.
> 
> The 5.0.7 Ubuntu build is sensitive to the library link order, seems
> sensible enough, although I don't understand why I don't see this
> problem with Fedora.
> 
> Adding the dependency to the build and adding a patch to fix the
> Makefiles link order gets binaries with the libtirpc dependency.
> 
> The clean target of the Makefile in the modules directory doesn't remove
> the yacc generated files and causes the Ubuntu build system to complain
> on a second an subsequent runs of the build.
> 
> The autofs-5.0.6-fix-libtirpc-name-clash.patch of 5.0.7 needs to be
> reverted in 14.04.4 for both the 5.0.7 and 5.1.1 builds to get rid of
> the get_auth problem. That's due to the old libtirpc.
> 
> Once built the resulting package always fails when probing proximity and
> availability (ie. calling into libtirpc).
> 
> The same thing happens with the 5.1.1 built on 14.04.4 with the above
> changes.
> 
> I added the Makefile link order change, the Makefile clean change, and
> added the libtirpc dependency to a Ubuntu 16.04 install and built the
> package.
> 
> I tried a couple of simple indirect mounts and it was able to mount
> them, so it seems the 14.04.4 problem is due to the old libtirpc
> version.
> 
> But it SEGVed in libtirpc when using a -hosts mount.
> I had a look at that and can't see any reason for the SEGV so that's a
> puzzle.
> 
> I use standard tirpc calls and they function fine in Fedora, so the
> seemingly innocent calls where it crashes are very strange.
> 
> If there was something that looked remotely like it could cause a
> problem at least I'd have something to follow up on.
> 
> This isn't good and I'm not sure where to go from here.
> 
> Ian
> --
> To unsubscribe from this list: send the line "unsubscribe autofs" in

-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-25 15:06               ` Christof Koehler
@ 2016-04-26  1:06                 ` Ian Kent
  2016-04-26  9:53                   ` Ian Kent
  0 siblings, 1 reply; 49+ messages in thread
From: Ian Kent @ 2016-04-26  1:06 UTC (permalink / raw)
  To: christof.koehler; +Cc: autofs

On Mon, 2016-04-25 at 17:06 +0200, Christof Koehler wrote:
> Hello,
> 
> that is bad news overall, but thank you very much for testing and
> trying
> it.  I had just installed 16.04 into a vm last week and would have
> made
> first attempts with autofs this week. So you saved me quite some work
> discovering that it is broken in an interesting way (I assume the SEGV
> refers to 16.04).

Yep, that's right.

The point is there will almost certainly be changes needed even for it
to build and produce binaries with proper dependencies let alone
anything that might be needed for the IPv6 functionality.

btw, I expect there will be changes needed in the IPv6 area because, when
that was done, there were quite a number of things that I wasn't sure
about and guessed at, or that I didn't know I didn't know ;)

Anyway, I will check the Makefile changes don't interfere with anything
else and have a think about what's really going on then most likely
commit them to the repo.

The Makefile cleanup patch is straightforward, that'll go straight in.

It will also be a while before my next commit and push to the repo, and
longer before they are available in a release, so I'll need to provide
patches if the Ubuntu folks can be persuaded to update the package.

That's probably not as simple as it sounds for a couple of reasons to do
with the Ubuntu build subsystem, how they want things to be in it and
that the autofs build is antiquated. Basically it barely uses the
autohell tools which could be part of the problem (or not, it's already
hard enough to make changes with the simple make file setup I have).

> A very complicated and confusing situation. This is basically the same
> I was in when I first asked about IPv6 support on this list. Not
> knowing 
> what to do and whom to ask about what in which order.
> 
> I fear I can be of no actual help with the segfault, I am not a 
> C programmer at all. 

It's that much more puzzling when I can see what I'm doing is just about
identical to showmount(8), which functions as it should, but autofs
doesn't.

The main difference is the RPC client creation that happens before that
and that's been ok for a while now. Unfortunately the client creation is
rather complicated because I need to control the timeouts and minimize
reserved port usage.
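
Just to give a feel for it, the core of that is along these lines (a minimal
sketch against the libtirpc API only, not the actual autofs code, and it
leaves out the reserved port handling and the address iteration):

#include <rpc/rpc.h>            /* TI-RPC client API, build with -ltirpc */

#define NFS_PROG  100003        /* RPC program number for NFS */
#define NFS_VERS4 4

/* Create a client handle for probing a host and force a short timeout so
 * a dead or unreachable server can't stall the automount lookup. */
static CLIENT *probe_client(const char *host)
{
	struct timeval total = { 8, 0 };        /* overall call timeout */
	CLIENT *clnt;

	clnt = clnt_create(host, NFS_PROG, NFS_VERS4, "tcp");
	if (clnt == NULL)
		return NULL;

	clnt_control(clnt, CLSET_TIMEOUT, (char *) &total);
	return clnt;
}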

>  
> 
> However, if I understand the difference between direct and
> indirect maps correctly what I am using is an indirect map ? The
> auto.master line is "/local  /etc/auto.local --timeout=150" and
> /etc/auto.local is an executable map which prints [1]
> # /etc/auto.local core330
> -fstype=nfs4,rw,intr,nosuid,soft,nodev core330:/locals

Yep, that's a fairly simple indirect mount.

> 
> So re-building and testing autofs on 16.04 with IPv6 could still be
> useful assuming the segfault is a separable problem (which might not
> be
> the case of course) ?

That most likely will function ok from what I saw.

Even more surprising is that the proximity and availability probe code uses
the same client creation as the code that has the problem and it seems
to work ....

Anyway I can clean up what I have and send over what's needed to build
the debs when you're ready.

> 
> 
> I can also offer to try to do it with debian testing in a vm. If it
> also
> segfaults I could file a bug report hoping to get the maintainer
> interested, especially if there are no problems with fedora. Shall I
> try
> that ? Instead or in addition to the test with IPv6 mentioned above ?

Not sure how to go about this.
I think a couple of changes are needed but fiddling with LDFLAGS
presence and position as I did might not be in line with what the Debian
folks want.

I suggest we just go along and see what we come up with.

Ian
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-26  1:06                 ` Ian Kent
@ 2016-04-26  9:53                   ` Ian Kent
  2016-04-26 15:27                     ` Christof Koehler
  0 siblings, 1 reply; 49+ messages in thread
From: Ian Kent @ 2016-04-26  9:53 UTC (permalink / raw)
  To: christof.koehler; +Cc: autofs

On Tue, 2016-04-26 at 09:06 +0800, Ian Kent wrote:
> > So re-building and testing autofs on 16.04 with IPv6 could still be
> > useful assuming the segfault is a separable problem (which might not
> > be
> > the case of course) ?
> 
> That most likely will function ok from what I saw.
> 
> Even more surprising is that the proximity and availability probe code uses
> the same client creation as the code that has the problem and it seems
> to work ....
> 
> Anyway I can clean up what I have and send over what's needed to build
> the debs when you're ready.
> 
> > 
> > 
> > I can also offer to try to do it with debian testing in a vm. If it
> > also
> > segfaults I could file a bug report hoping to get the maintainer
> > interested, especially if there are no problems with fedora. Shall I
> > try
> > that ? Instead or in addition to the test with IPv6 mentioned above
> > ?
> 
> Not sure how to go about this.
> I think a couple of changes are needed but fiddling with LDFLAGS
> presence and position as I did might not be in line with what the
> Debian
> folks want.
> 
> I suggest we just go along and see what we come up with.

I spent a little more time on this.

I used a smallish sledgehammer approach, building an updated libtirpc
and building dependencies I'm aware of against it, nfs-common and
rpcbind as well as autofs 5.1.1.

I just went straight to libtirpc 1.0.1.

The resulting autofs package didn't show the problem I saw with the
internal hosts map.

So I'd have to say this looks like a library version problem after all.

It seems to me that spending some time to work out how to provide a
launchpad ppa is probably the best way to get you to a position to test
the IPv6 functionality.

That also means that in time there could be a 14.04 build too, not sure
about that though.

I can't spend more time on this for a little while so I'll need some
time to work out how to do the ppa thing, if in fact I can.

I guess the other thing to worry about is if doing this will annoy the
official downstream maintainer ... maybe we should attempt to make
contact some time.

Ian
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-26  9:53                   ` Ian Kent
@ 2016-04-26 15:27                     ` Christof Koehler
  2016-04-27  1:54                       ` Ian Kent
  0 siblings, 1 reply; 49+ messages in thread
From: Christof Koehler @ 2016-04-26 15:27 UTC (permalink / raw)
  To: Ian Kent; +Cc: autofs

Hello,

this appears to be a matryoshka of problems.

> 
> The resulting autofs package didn't show the problem I saw with the
> internal hosts map.
> 
> So I'd have to say this looks like a library version problem after all.
OK, thank you for looking into this.

> 
> It seems to me that spending some time to work out how to provide a
> > launchpad ppa is probably the best way to get you to a position to test
> the IPv6 functionality.
I will configure that 16.04 vm this week and integrate it into the
network as a "first class" workstation. It should be ready as a testbed
(including snapshot capability) for whenever it is actually needed.

While I am at it I will see that basic IPv6 tests (with and without
libtirpc) are done in any case.

> 
> That also means that in time there could be a 14.04 build too, not sure
> about that though.
We will move our workstations to 16.04.1 after it is released (provided 
it has no show stoppers), so 14.04 is not important at all. Do not bother 
with it.

> 
> I can't spend more time on this for a little while so I'll need some
> time to work out how to do the ppa thing, if in fact I can.
I am not sure about it. If it is easier for you to provide patches I can also
recompile. Or I could provide a possibility to upload packages, the
University has its own file hosting service (think dropbox). If I am 
installing binary or source packages I have to trust their origin anyway.
Whatever is easiest for you.

In any case take your time ! We await delivery of a new mid-size
HPC cluster in May (completely unrelated to autofs). As soon as it is 
here I will be absorbed in configuration and migration for some time.

> 
> I guess the other thing to worry about is if doing this will annoy the
> official downstream maintainer ... maybe we should attempt to make
> contact some time.
> 


Best Regards

Christof


-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-26 15:27                     ` Christof Koehler
@ 2016-04-27  1:54                       ` Ian Kent
  2016-04-27  2:27                         ` Ian Kent
  2016-04-27 16:52                         ` Christof Koehler
  0 siblings, 2 replies; 49+ messages in thread
From: Ian Kent @ 2016-04-27  1:54 UTC (permalink / raw)
  To: christof.koehler; +Cc: autofs

On Tue, 2016-04-26 at 17:27 +0200, Christof Koehler wrote:
> Hello,
> 
> this appears to be a matryoshka of problems.

LOL, I had to look matryoshka up in the dictionary.

Perhaps, but it is difficult for downstream maintainers to keep up with
what's happening with packages, especially when there are quite a few
changes over time.

Not only that, when those changes silently depend on other libraries
it's easy to get caught.
 
> 
> > 
> > The resulting autofs package didn't show the problem I saw with the
> > internal hosts map.
> > 
> > So I'd have to say this looks like a library version problem after
> > all.
> OK, thank you for looking into this.
> 
> > 
> > It seems to me that spending some time to work out how to provide a
> > launchpad ppa is probably the best way to get you to a position to
> > test
> > the IPv6 functionality.
> I will configure that 16.04 vm this week and integrate it into the
> network as a "first class" workstation. It should be ready as a
> testbed
> (including snapshot capability) then for whenever it is actually
> needed.
> 
> While I am at it I will see that basic IPv6 tests (with and without
> libtirpc) are done in any case.

That will be useful.

IPv6 mounts should work, that's what we need to check first, of course.

There's been fairly limited use of the IPv6 code. I've tried to fix
things that have been reported but there are still things that need
checking.

For example, network proximity.

Because autofs provides interactive access to mounts there needs to be a
way to avoid trying to mount from hosts that are not responding and
there are some rules about multi-host selection relating to proximity
and response time.

There are also reserved port exhaustion difficulties, which are increased
by the need to calculate proximity and response time.

That's why I need to use lower level RPC calls for these tasks and
that's where the problem we have here originates.
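
The response time part of the probe is nothing exotic, it's essentially a
timed RPC NULL-procedure call, roughly like this (again only a sketch using
a client handle created as above, not what autofs literally does):

#include <rpc/rpc.h>
#include <time.h>

/* Time an RPC NULL-procedure ping; this is the kind of number that shows
 * up in the debug log as "rpc ping time". Returns a negative value if the
 * host or service doesn't answer within the client's timeout. */
static double rpc_ping_time(CLIENT *clnt)
{
	struct timeval to = { 5, 0 };
	struct timespec start, end;
	enum clnt_stat status;

	clock_gettime(CLOCK_MONOTONIC, &start);
	status = clnt_call(clnt, NULLPROC,
			   (xdrproc_t) xdr_void, (caddr_t) NULL,
			   (xdrproc_t) xdr_void, (caddr_t) NULL, to);
	clock_gettime(CLOCK_MONOTONIC, &end);

	if (status != RPC_SUCCESS)
		return -1.0;

	return (end.tv_sec - start.tv_sec) +
	       (end.tv_nsec - start.tv_nsec) / 1e9;
}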

When I wrote the IPv6 code I wasn't sure how to make the IPv6 proximity
relate to the IPv4 proximity. It should be the same for all addresses of
a multi-homed host but it might not be.

So we could have the situation where, if there are both IPv6 and IPv4
addresses for a host, one is always preferred over the other.

Then there's the question of order of addresses to try.
Should IPv6 addresses be higher priority than IPv4?

Should I even use IPv4 addresses when IPv6 addresses are present and
fall back to IPv4 if all IPv6 addresses fail?

If there is more than one distinct host and a host with only an IPv4
address is "closer" (that's the proximity question and also response
time) than another host with only an IPv6 address, what selection policy
should I use?

These probably aren't going to all be covered by us but perhaps it will
give me some more insight into the questions above.

I haven't looked at the code yet so maybe it isn't the problem I think
it might be.

And setting up an IPv6 test environment is difficult so having someone
that needs this is useful.

Thanks for doing this.

> 
> > 
> > That also means that in time there could be a 14.04 build too, not
> > sure
> > about that though.
> We will move our workstations to 16.04.1 after it is released
> (provided 
> it has no show stoppers), so 14.04 is not important at all. Do not
> bother 
> with it.

Too late, it's already done, ;)

> 
> > 
> > I can't spend more time on this for a little while so I'll need some
> > time to work out how to do the ppa thing, if in fact I can.
> I am not sure about it. If it is easier for you to provide patches I
> can also
> recompile. Or I could provide a possibility to upload packages, the
> University has its own file hosting service (think dropbox). If I am 
> installing binary or source packages I have to trust their origin
> anyway.
> Whatever is easiest for you.

Providing packages via a ppa seems straightforward enough.

I need to remove all the packages and install from the ppa for each
release to check they work (once they have been built by the launchpad
system).

I really need to leave that for a little bit later though.

> 
> In any case take your time ! We await delivery of a new mid-size
> HPC cluster in May (completely unrelated to autofs). As soon as it is 
> here I will be absorbed in configuration and migration for some time.

LOL, you won't have much time once you get into that, ;)

Anyway, I'll reply with the ppa instructions once I've checked they work
ok.

Ian
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-27  1:54                       ` Ian Kent
@ 2016-04-27  2:27                         ` Ian Kent
  2016-04-27 16:52                         ` Christof Koehler
  1 sibling, 0 replies; 49+ messages in thread
From: Ian Kent @ 2016-04-27  2:27 UTC (permalink / raw)
  To: christof.koehler; +Cc: autofs

On Wed, 2016-04-27 at 09:54 +0800, Ian Kent wrote:
> On Tue, 2016-04-26 at 17:27 +0200, Christof Koehler wrote:
> > 
> > While I am at it I will see that basic IPv6 tests (with and without
> > libtirpc) are done in any case.
> 
> That will be useful.

Don't forget that the glibc RPC implementation shouldn't work with IPv6
and autofs relies on RPC calls for it's proximity and availability
checks so IPv6 without libtirpc probably won't work and that's what we
have seen so far.

You can set some specific autofs configuration options to prevent the
proximity and availability checks for most cases in which case the host
name would be passed on to mount.nfs(8) and work ok for IPv6 mounting to
the extent that mount.nfs(8) does.

But that's not really testing autofs IPv6 handling though.

Ian
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-27  1:54                       ` Ian Kent
  2016-04-27  2:27                         ` Ian Kent
@ 2016-04-27 16:52                         ` Christof Koehler
  2016-04-28  2:56                           ` Ian Kent
  1 sibling, 1 reply; 49+ messages in thread
From: Christof Koehler @ 2016-04-27 16:52 UTC (permalink / raw)
  To: Ian Kent; +Cc: autofs

[-- Attachment #1: Type: text/plain, Size: 6053 bytes --]

Hello,

I have the system running and did first tests. This was 
interesting although I observed basically the behaviour you were 
expecting.

From what I see without libtirpc (standard package), autofs is as
expected oblivious to IPv6. It passes the server's hostname to mount, which
in the case of IPv4 and a single IPv6 address in the end uses the IPv6 one.
In the case of two IPv6 addresses it falls back to IPv4 for some reason. So
that is unrelated to autofs then (and I should ask somewhere else) ? I
attached two syslog outputs (debug level) covering the single IPv6
(single_log_wo_tirpc.txt.gz) and double IPv6 (multi_log_wo_tirpc.txt.gz) cases.

I note, though, that mount on its own (mount -tnfs4 core330:/locals ...)
always picks one of the IPv6 addresses and never the IPv4 address if
both IPv6 addresses are in the DNS. So there is some difference to the
behaviour when called from autofs. Do you have an idea what that might
be or what I can do to find that out ?

I rebuilt the stock package "--with-libtirpc" (after removing the
problematic #ifdef block from rpc_subs.c) against the systems libtirpc
0.2.5. Then I tested again the two cases mentioned above, log output is
attached as single_log_tirpc.txt.gz and multi_log_tirpc.txt.gz.

In the "single" case the IPv6 address is always used as far as I could see
(about 10 tries, fewer in the log). The response time is apparently not used,
e.g. (see single_log_tirpc.txt.gz)

Apr 27 16:23:33 core400 automount[2473]: get_nfs_info: nfs v4 rpc ping time: 0.000115
Apr 27 16:23:33 core400 automount[2473]: get_nfs_info: nfs v4 rpc ping time: 0.000153

results in

Apr 27 16:23:33 core400 automount[2473]: mount_mount: mount(nfs): calling mount -t nfs4 -s -o rw,intr,nosuid,soft,nodev core330:/locals /local/core330

which then apparently decides to use the IPv6 address on its own.

In the double IPv6 address case I see that all available addresses 
(IPv4 192.168.220.118, IPv6 GUA 2001:...:118 and IPv6 ULA fd5f:...:118) are used 
to mount and that actual IP addresses are passed to mount instead of a 
hostname (see single example above for the opposite behaviour). The
choice of address is a direct result of the response time as you mentioned. 

It is unclear to me how it decides if an IP address or a
hostname should be passed to mount; might this be multi-homed or
failover behaviour ? But then even with IPv4 and a single IPv6 address the host
should be considered multi-homed for that purpose and not only if it has
multiple IPv6 addresses ?

Or is it working as designed ?

Now some opinions before some package rebuild and technical questions:

On Wed, Apr 27, 2016 at 09:54:38AM +0800, Ian Kent wrote:
> 
> Then there's the question of order of addresses to try.
> Should IPv6 addresses be higher priority than IPv4?

I now see and understand the question. The behaviour of mount (standalone
or called from autofs) is puzzling in this context. Standalone
mount does IMHO the right thing:

I always assumed that there was an agreed preference, e.g. observing 
for http/ssh connections that IPv6 in fact is preferred before
eventually falling back to IPv4 (perhaps after a timeout).

RFC 6724's Abstract says: 
"In dual-stack implementations, the destination address
selection algorithm can consider both IPv4 and IPv6 addresses --
depending on the available source addresses, the algorithm might
prefer IPv6 addresses over IPv4 addresses, or vice versa."

and in Section 10.3.
"The default policy table gives IPv6 addresses higher precedence than
IPv4 addresses.  This means that applications will use IPv6 in
preference to IPv4 when the two are equally suitable."

This is what /etc/gai.conf (source selection) and "ip addrlabel"
defaults (destination selection) are based on. But maybe I am
overinterpreting the RFC considering its Abstract.

As far as I read earlier this preference caused/can cause a great deal of 
pain sometimes.
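
As far as I understand it that policy table is what getaddrinfo() applies
when it sorts its results, so a client that just walks the returned list in
order gets the RFC 6724 preference for free. A minimal sketch (the host name
is made up, not one of our machines):

#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>

/* Print the addresses for a host in the order getaddrinfo() returns them;
 * glibc sorts them according to the RFC 6724/3484 rules, influenced by
 * /etc/gai.conf. */
int main(void)
{
	struct addrinfo hints, *res, *ai;
	char buf[64];

	memset(&hints, 0, sizeof(hints));
	hints.ai_family = AF_UNSPEC;            /* both IPv4 and IPv6 */
	hints.ai_socktype = SOCK_STREAM;

	if (getaddrinfo("nfs-server.example.org", NULL, &hints, &res) != 0)
		return 1;

	for (ai = res; ai != NULL; ai = ai->ai_next) {
		if (getnameinfo(ai->ai_addr, ai->ai_addrlen, buf, sizeof(buf),
				NULL, 0, NI_NUMERICHOST) == 0)
			printf("%s\n", buf);
	}
	freeaddrinfo(res);
	return 0;
}

Whether mount.nfs actually takes the first entry of such a list is part of
what I still want to understand.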


> 
> Should I even use IPv4 addresses when IPv6 addresses are present and
> fall back to IPv4 if all IPv6 addresses fail?
Fallback after failure might be reasonable. But I agree that there will be
opinions and my use case is not a general or really complicated one, so
I will abstain.

> 
> If there is more than one distinct host and a host with only an IPv4
> address is "closer" (that's the proximity question and also response
> time) than another host with only an IPv6 address, what selection policy
> should I use?
Yes, I see the point. This is obviously out of the RFC's scope.

One could argue for a configuration file or compile time option(s) to
influence address selection, but you know the pros and cons of that and I do
not. Especially considering that (some) distros are not even using libtirpc
to begin with.


Now the package rebuild questions:
You built libtirpc 1.0.1 from the sourceforge source, put include and
library files in the system locations by hand and then recompiled the
autofs package against these ? Or is there a neat trick to avoid messing
in the system locations by hand ? 
I am unsure how to reproduce what you did.

Another question: What is your expectation in a situation where only
IPv6 is available, no IPv4 for the server ? Is  the one mentioned in #25 of
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=737679
expected to be still a working solution ? I might have an NFS server soon
which, due to a routing conflict difficult to resolve, would only get
an IPv6 address visible to the clients. I should test this case with a 
dedicated vm as server anyway ...

> 
> And setting up an IPv6 test environment is difficult so having someone
> that needs this is useful.
> Thanks for doing this.
> 

I am glad to be able to help with testing. And of course thank you for taking
the time to help me !


Best Regards

Christof



-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/

[-- Attachment #2: multi_log_wo_tirpc.txt.gz --]
[-- Type: application/octet-stream, Size: 2185 bytes --]

[-- Attachment #3: multi_log_tirpc.txt.gz --]
[-- Type: application/octet-stream, Size: 4301 bytes --]

[-- Attachment #4: single_log_wo_tirpc.txt.gz --]
[-- Type: application/octet-stream, Size: 2085 bytes --]

[-- Attachment #5: single_log_tirpc.txt.gz --]
[-- Type: application/octet-stream, Size: 2659 bytes --]

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-27 16:52                         ` Christof Koehler
@ 2016-04-28  2:56                           ` Ian Kent
  2016-04-28  3:21                             ` Ian Kent
  2016-04-28  9:10                             ` Christof Koehler
  0 siblings, 2 replies; 49+ messages in thread
From: Ian Kent @ 2016-04-28  2:56 UTC (permalink / raw)
  To: christof.koehler; +Cc: autofs

On Wed, 2016-04-27 at 18:52 +0200, Christof Koehler wrote:
> Hello,
> 
> I have the system running and did first tests. This was 
> interesting although I observed basically the behaviour you were 
> expecting.
> 

I might not get this quite right but I'll try.

What release of Ubuntu are we talking about here?

The thing to keep in mind in all of this is that autofs always needs to
check availability so it doesn't try to mount something where there is
no host available. But that can get in the way sometimes and result in
there being no host to try the mount.

The history of the availability check for single hosts is a little
interesting.

It was present in the original version 5 release but people wanted to
eliminate the extra traffic so it was removed for the single host case.

Then when the mount option processing was taken into the NFS kernel
client we saw that the kernel RPC can't give up on RPC IOs as easily as
the glibc RPC (for NFS file system corruption reasons) and we started
seeing lengthy waits.

After some discussion that was mitigated to a degree but not enough so
the single host availability check was brought back.

You will probably notice a bug with all of this and that is even though
each protocol (TCP and UDP) is checked, IIRC, the proto= option is not
then added to the mount command (when the original options don't provide
a proto= option). But I digress.
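
(To make the digression concrete: the probe may have established that TCP is
what answers, but the command still comes out like the ones in your logs
instead of something like the following purely illustrative line with the
probe result appended:

mount -t nfs4 -s -o rw,intr,nosuid,soft,nodev,proto=tcp core330:/locals /local/core330

Anyway, back to the point.)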

So that's the reason for the availability check and why it's used for
single as well as hosts that resolve to multiple addresses.

The other thing to be aware of is that autofs can't know if a host that
resolves to multiple addresses corresponds to a single host that has
multiple addresses or if there are multiple distinct hosts that have the
same file system available and the addresses are being used for load
balancing in some way.

Since what's needed for multiple hosts and hosts that resolve to
multiple addresses is essentially the same as what's needed for the
single host availability check the same code is used to check single
host availability. So you will see what appear to be redundant checks
that aren't used.

The other thing is that if libtirpc isn't being used the IPv6 code
exclusion isn't quite right, which is what causes some of the unexpected
results we are seeing.

Even so I'm not sure what to do about the IPv6 code exclusion because
I'm more inclined to make the package require libtirpc and remove the
option to not use it altogether given that libtirpc has been generally
available in distros for quite a while now.

> From what I see without libtirpc (standard package), autofs is as
> expected oblivious to IPv6. It passes the servers hostname to mount,
> which 
> in the case of IPv4 and a single IPv6 address in the end uses the IPv6
> one. 
> In the case of two IPv6 addresses it falls back to IPv4 for some
> reason. So 
> that is unrelated to autofs then (and I should ask somewhere else) ? I
> attached two syslog ouputs (debug level) covering the single IPv6 
> (single_log_wo_tirpc.txt.gz) and double IPv6
> (multi_log_wo_tirpc.txt.gz) cases.

Umm ... we are talking without libtirpc, right?

If the name resolves to a single address the probe is essentially an
availability check and the mount just uses the host name.

So you should see the same behaviour as mount.nfs(8).

The faulty IPv6 code exclusion can cause autofs to think there are
multiple addresses even though there aren't (since the IPv6 addresses
are only partly ignored) and that can result in only an IPv4 address
being used.

So I suspect what you're seeing is expected and is probably not worth
investigating further.

I will however have a look at the logs to check.

> 
> I note, though, that mount on its own (mount -tnfs4 core330:/locals
> ...)
> always picks one of the IPv6 addresses and never the IPv4 address if
> both IPv6 addresses are in the DNS. So there is some difference to the
> behaviour when called from autofs. Do you have an idea what that might
> be or what I can do to find that out ?

I think I answered that above, without having looked at the log I
believe it is an autofs problem.

> 
> I rebuilt the stock package "--with-libtirpc" (after removing the
> problematic #ifdef block from rpc_subs.c) against the systems libtirpc
> 0.2.5. Then I tested again the two cases mentioned above, log output
> is
> attached as single_log_tirpc.txt.gz and multi_log_tirpc.txt.gz.

I had problems on 14.04 and don't think I bothered with trying that on
16.04 and just brought the link order changes across as a matter of
course.

I had problems with the build in both so I just went for the latest
version of libtirpc.

I still need to cherry pick some recent autofs patches though, one in
particular fixes a program map regression introduced in 5.1.1 so I'm not
quite done yet with the ppa.

> 
> In the "single" case the IPv6 address is always used as far as I could
> see
> (about 10 tries, fewer in log). The response time is apparently not
> used
> e.g. (see single_log_tirpc.txt.gz)
> Apr 27 16:23:33 core400 automount[2473]: get_nfs_info: nfs v4 rpc ping
> time: 0.000115
> Apr 27 16:23:33 core400 automount[2473]: get_nfs_info: nfs v4 rpc ping
> time: 0.000153
> results in
> Apr 27 16:23:33 core400 automount[2473]: mount_mount: mount(nfs):
> calling mount -t nfs4 -s -o rw,intr,nosuid,soft,nodev core330:/locals
> /local/core330
> which then apparently decides to use the IPv6 address in its own.

Yep, sounds like the availability check I described, that's expected.

> 
> In the double IPv6 address case I see that all available addresses 
> (IPv4 192.168.220.118, IPv6 GUA 2001:...:118 and IPv6 ULA
> fd5f:...:118) are used 
> to mount and that actual IP addresses are passed to mount instead of a
> hostname (see single example above for the opposite behaviour). The
> choice of address is a direct result of the response time as you
> mentioned. 

Again sounds like what's expected.

> 
> It is unclear to me how it decides if an IP address or an
> hostname should be passed to mount, might this be multi-homed or
> failover behaviour ? But then even with IPv4 and a single IPv6 address
> the host
> sould be considered multi-homed for that purpose and not only if it
> has
> multiple IPv6 addresses ?

Yes, that is a problem I mentioned in an earlier post.

There is (I think) a problem with how autofs decides if a host has
multiple addresses which was introduced when there was a complaint about
it.

I can't remember the details now but the bottom line is that autofs will
consider a host to have multiple addresses if the name resolution
results in two or more addresses for either the IPv6 or the IPv4.

I think that's wrong and I should revert it.
I'll need to try and work out what the complaint was but that could be
difficult.

I'll need to have a look at what makes autofs use the address over the
name. There might be a small problem with that too, not sure.

Anyway, first it needs to think the name resolves to multiple addresses
(but consider the problem above).

I think that even if there ends up being only one entry on the list of
available hosts (consisting of distinct and multi-address hosts that are
responding) it must use the address when there were individual hosts
with multiple addresses. Using the name could end up with mount.nfs
trying a host that isn't responding. But it should use the name when the
entry corresponds to a host that resolves to a single address.

I'll check that.
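
In other words the rule I think it should follow is roughly this (a sketch of
the intent only; the structure and field names are made up and are not the
autofs internals):

/* Decide what string gets handed to mount.nfs(8) for a chosen entry.
 * All names below are hypothetical and only illustrate the rule. */
struct chosen_host {
	const char *name;           /* e.g. "core330" */
	const char *selected_addr;  /* e.g. "fd5f::118" or "192.168.220.118" */
	int naddrs;                 /* how many addresses the name resolved to */
};

static const char *mount_target(const struct chosen_host *h)
{
	if (h->naddrs > 1)
		return h->selected_addr; /* don't let mount.nfs retry a dead address */
	return h->name;                  /* single-address host: pass the name through */
}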

> Or is it working as designed ?
> 
> Now some opinions before some package rebuild and technical questions:
> 
> On Wed, Apr 27, 2016 at 09:54:38AM +0800, Ian Kent wrote:
> > 
> > Then there's the question of order of addresses to try.
> > Should IPv6 addresses be higher priority than IPv4?
> 
> I now see and understand the question. The behaviour of mount (stand
> alone or called from autofs) is puzzling in this context. Stand alone
> mount does IMHO the right thing:
> 
> I always assumed that there was an agreed preference, e.g. observing 
> for http/ssh connections that IPv6 in fact is preferred before
> eventually falling back to IPv4 (perhaps after a timeout).

There probably is but I haven't integrated that into autofs.
That's what I hope to get from this investigation.

> 
> RFC 6724's Abstract says: 
> "In dual-stack implementations, the destination address
> selection algorithm can consider both IPv4 and IPv6 addresses --
> depending on the available source addresses, the algorithm might
> prefer IPv6 addresses over IPv4 addresses, or vice versa."
> 
> and in Section 10.3.
> "The default policy table gives IPv6 addresses higher precedence than
> IPv4 addresses.  This means that applications will use IPv6 in
> preference to IPv4 when the two are equally suitable."

And these certainly imply I should prefer IPv6 over IPv4.
But they also assume IPv6 usage is prevalent and that's been a long time
coming and I'm not sure it's quite here yet either.

Perhaps it is time to add this preference now anyway.

> 
> This is what /etc/gai.conf (source selection) and "ip addrlabel"
> defaults (destination selection) are based on. But may be I am
> overinterpreting the RFC considering its Abstract.
> 
> As far as I read earlier this preference caused/can cause a great deal
> of 
> pain sometimes.
> 
> 
> > 
> > Should I even use IPv4 addresses when IPv6 addresses are present and
> > fall back to IPv4 if all IPv6 addresses fail?
> Fallback after failure might be reasonable. But I agree that there
> will be
> opinions and my use case is not a general or really complicated one,
> so
> I will abstain.
> 
> > 
> > If there is more than one distinct host and a host with only an IPv4
> > address is "closer" (that's the proximity question and also response
> > time) than another host with only an IPv6 address, what selection
> > policy
> > should I use?
> Yes, I see the point. This is obviously out of the RFC's scope.
> 
> One could argue for a configuration file or compile time option(s) to
> influence address selection, but you know pros and cons of that and I
> do
> not. Especially considering that (some) distros are not even using
> libtirpc
> to begin with.

I think a conservative approach is best.

I think just adding a preference is sufficient for now; given that the
availability check is done, if the service isn't offered the host won't
be tried.

That also implies IPv4 addresses will be retained and tried as well.

I think the only question I need to answer is what influence
(if any) response time should play between IPv4 and IPv6. I think it
best to not use it at all to start with (which I think should be the way
it is now, once a v6 over v4 ordering preference is added).
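
Concretely I imagine the ordering ending up something like this (a sketch of
the policy being discussed, not autofs code; the struct and its fields are
made up):

#include <stdlib.h>
#include <sys/socket.h>

/* Hypothetical probe result for one address of a host. */
struct probe {
	int family;             /* AF_INET or AF_INET6 */
	double ping_time;       /* response time in seconds */
};

/* Prefer IPv6 over IPv4 outright and only use the response time to order
 * addresses within the same address family. */
static int cmp_probe(const void *a, const void *b)
{
	const struct probe *pa = a, *pb = b;

	if (pa->family != pb->family)
		return pa->family == AF_INET6 ? -1 : 1;
	if (pa->ping_time < pb->ping_time)
		return -1;
	return pa->ping_time > pb->ping_time;
}

/* usage: qsort(probes, nprobes, sizeof(struct probe), cmp_probe); */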

> Now the package rebuild questions:
> You built libtirpc 1.0.1 from the sourceforge source, put include and
> library files in the system locations by hand and then recompiled the
> autofs package against these ? Or is there a neat trick to avoid
> messing
> in the system locations by hand ? 
> I am unsure how to reproduce what you did.

I'm using launchpad to provide a ppa apt source.

So the installed debs will replace existing packages, notably libtirpc,
rpcbind, nfs-common and autofs.

I tried to use the existing distribution package sources (including the
existing package maintainer patches where appropriate) but had to update
to a later distribution source version for at least one (nfs-common on
Trusty I think, is using the Xenial package source). That's apart from
libtirpc which is the latest available version.

From what I can see existing configuration is left untouched so removing
the debs and the ppa source and installing the distributed debs should
result in what you had before the ppa install.

Don't be confused by the change in autofs configuration location in
autofs 5.1.1 (/etc/autofs.conf).

If you only change the autofs configuration in /etc/default/autofs
(IIRC) that will override the new configuration allowing you to switch
between older and newer versions of autofs without configuration
inconsistencies.
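
For example (option names from memory, so double check them against the files
the packages actually ship), keeping the overrides in the old file would look
something like this:

# /etc/default/autofs -- read by both the old and the new packages
TIMEOUT=150
LOGGING="debug"

# /etc/autofs.conf -- new with autofs 5.1.1, overridden by the above
[ autofs ]
timeout = 150
logging = debug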

We need to keep an eye out in case I've missed something in the Debian
package install configuration file handling. Back the configuration files
up beforehand; you can then just put them back and all should be fine.

And as I said I still need to add some patches to the autofs deb so the
ppa can't be used just yet.

Because I'm using launchpad the debs from me can be verified as coming
from me and the build packaging files are (I believe, or can be anyway)
publicly available for you to inspect and build from yourself using the
standard Debian build tools if you wish.

> 
> Another question: What is your expectation in a situation where only
> IPv6 is available, no IPv4 for the server ? Is  the one mentioned in
> #25 of
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=737679
> expected to be still a working solution ? I might have an NFS server
> soon
> which, due to a routing conflict difficult to resolve, would only get
> an IPv6 address visible to the clients. I should test this case with a
> dedicated vm as server anyway ...

I think we've covered that above.

The scenario in that bug shows autofs behaving as expected due to lack
of IPv6 support in glibc I think.

I hope that the autofs ppa version will perform the mount fine, as long
as the server is responding but that's one thing we're here to sort out,
;)

Ian
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-28  2:56                           ` Ian Kent
@ 2016-04-28  3:21                             ` Ian Kent
  2016-04-28  9:12                               ` Christof Koehler
  2016-04-28  9:10                             ` Christof Koehler
  1 sibling, 1 reply; 49+ messages in thread
From: Ian Kent @ 2016-04-28  3:21 UTC (permalink / raw)
  To: christof.koehler; +Cc: autofs

On Thu, 2016-04-28 at 10:56 +0800, Ian Kent wrote:
> > 
> > Another question: What is your expectation in a situation where only
> > IPv6 is available, no IPv4 for the server ? Is  the one mentioned in
> > #25 of
> > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=737679
> > expected to be still a working solution ? I might have an NFS server
> > soon
> > which, due to a routing conflict difficult to resolve, would only
> > get
> > an IPv6 address visible to the clients. I should test this case with
> > a
> > dedicated vm as server anyway ...
> 
> I think we've covered that above.

Actually, maybe not.

There's a rather long story associated with the use of NFSv4 and the
associated server mount (or export) path.

It depends on kernel version and nfs-utils version.

Put simply Linux NFS originally only allowed the use of "/" for NFSv4
mounts but autofs couldn't work out if a translation was needed because
of limited information available from the server.

So Linux NFS was changed to behave like other implementations and allow
the same paths as are used in NFSv3 (and v2). That has the benefit of
allowing consistent fallback from v4 to v3 as well.

> 
> The scenario in that bug shows autofs behaving as expected due to lack
> of IPv6 support in glibc I think.
> 
> I hope that the autofs ppa version will perform the mount fine, as
> long
> as the server is responding but that's one thing we're here to sort
> out,
> ;)
> 
> Ian
> --
> To unsubscribe from this list: send the line "unsubscribe autofs" in
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-28  2:56                           ` Ian Kent
  2016-04-28  3:21                             ` Ian Kent
@ 2016-04-28  9:10                             ` Christof Koehler
  2016-04-28 10:50                               ` Ian Kent
  1 sibling, 1 reply; 49+ messages in thread
From: Christof Koehler @ 2016-04-28  9:10 UTC (permalink / raw)
  To: Ian Kent; +Cc: autofs

Hello.

On Thu, Apr 28, 2016 at 10:56:30AM +0800, Ian Kent wrote:
> On Wed, 2016-04-27 at 18:52 +0200, Christof Koehler wrote:
> > Hello,
> > 
> > I have the system running and did first tests. This was 
> > interesting although I observed basically the behaviour you were 
> > expecting.
> > 
> 
> I might not get this quite right but I'll try.
> 
> What release of Ubuntu are we talking about here?
This is the vm with 16.04.

> 
> The other thing to be aware of is that autofs can't know if a host that
> resolves to multiple addresses corresponds to a single host that has
> multiple addresses or if there are multiple distinct hosts that have the
> same file system available and the addresses are being used for load
> balancing in some way.
OK, I was not fully aware of that.

> 
> I'm more inclined to make the package require libtirpc and remove the
> option to not use it altogether given that libtirpc has been generally
> available in distros for quite a while now.
I might point out that libtirpc is already a hard package dependency of
nfs-common on Debian testing and Ubuntu 16.04.

> 
> > From what I see without libtirpc (standard package), autofs is as
> > expected oblivious to IPv6. It passes the servers hostname to mount,
> > which 
> > in the case of IPv4 and a single IPv6 address in the end uses the IPv6
> > one. 
> > In the case of two IPv6 addresses it falls back to IPv4 for some
> > reason. So 
> > that is unrelated to autofs then (and I should ask somewhere else) ? I
> > attached two syslog ouputs (debug level) covering the single IPv6 
> > (single_log_wo_tirpc.txt.gz) and double IPv6
> > (multi_log_wo_tirpc.txt.gz) cases.
> 
> Umm ... we are talking without libtirpc, right?
Yes. This is the behaviour with the packages as provided by the
distribution. Same behaviour on 16.04 as observed on 14.04.

> I think I answered that above, without having looked at the log I
> believe it is an autofs problem.

Yes, you are right. I misread the logs and was looking at the line

mount_mount: mount(nfs): root=/local name=core330 what=core330:/locals, fstype=nfs4, options=rw,intr,nosuid,soft,nodev

instead of

mount(nfs): calling mount -t nfs4 -s -o rw,intr,nosuid,soft,nodev 192.168.220.118:/locals /local/core330

> 
> I had problems with the build in both so I just went for the latest
> version of libtirpc.
No problems at all, just adding --with-libtirpc as the next to last argument
to configure and un-patching rpc_subs.c did the trick. The build went
smoothly, my indirect maps worked without a hitch. Should I send you a diff
of the sources ?

> > 
> > It is unclear to me how it decides if an IP address or an
> > hostname should be passed to mount, might this be multi-homed or
> > failover behaviour ? But then even with IPv4 and a single IPv6 address
> > the host
> > sould be considered multi-homed for that purpose and not only if it
> > has
> > multiple IPv6 addresses ?
> 
> Yes, that is a problem I mentioned in an earlier post.
But please keep in mind that I misread the logs, see above. So I was
confused.

> > 
> > RFC 6724's Abstract says: 
> > "In dual-stack implementations, the destination address
> > selection algorithm can consider both IPv4 and IPv6 addresses --
> > depending on the available source addresses, the algorithm might
> > prefer IPv6 addresses over IPv4 addresses, or vice versa."
> > 
> > and in Section 10.3.
> > "The default policy table gives IPv6 addresses higher precedence than
> > IPv4 addresses.  This means that applications will use IPv6 in
> > preference to IPv4 when the two are equally suitable."
> 
> And these certainly imply I should prefer IPv6 over IPv4.
> But they also assume IPv6 usage is prevalent and that's been a long time
> coming and I'm not sure it's quite here yet either.
Yes.

> 
> Perhaps it is time to add this preference now anyway.

Perhaps I should point out that there is one more indirect factor for
consideration: mount. 

In principle RFC 6724 also contains recommendations on what IPv6 address to
pick (GUA or ULA for example, or among multiple GUAs), in fact that is
its main subject. As far as I understand it, and from observing what ssh does,
if there is a GUA and a ULA the ULA (fd5f:... in this case) should be
preferred (selected). Obviously mount (mount.nfs4) does not care about
this recommendation either; in my test it chose randomly (?) one or the
other IPv6 address. Maybe due to the order of addresses in the DNS
reply. I will try to understand this better.

So even mount does not fully care about RFC 6724 or my basic assumption
about the whole process is wrong in some way. I have to work that out. In
the worst case my expectation has been wrong from the beginning when I
brought this question up.

> 
> I think the only the only question I need to answer is what influence
> (if any) response time should play between IPv4 and IPv6. I think it
> best to not use it at all to start with (which I think should be the way
> it is now, once a v6 over v4 ordering preference is added).
In principle I would agree, ignoring the RFC 6724 IPv6 address selection
recommendations which, as you pointed out, do not cover this case
when response times are important.

My current understanding of RFC 6724 leads to two remarks then:

As far as I understand the log output you are deciding on a hard
comparison basis, so even a one microsecond difference decides.
However, ping times (response times) on Ethernet effectively show
random variations, sometimes in the millisecond range.
In numerics (when comparing results) you would define an interval where
numbers are considered to be effectively equal to catch rounding errors.
One could use RFC 6724 to decide within such an interval. I have no idea
about the complexity involved, though. How does ssh handle that ?
Probably it relies on some service mechanism to get this handled for it.
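
What I have in mind is something like this (just a sketch of the idea, and
the tolerance value is an arbitrary choice):

#include <math.h>

#define PING_EPSILON 0.001      /* 1 ms; arbitrary tolerance */

/* Treat two probe response times as effectively equal when they differ by
 * less than the tolerance, so a preference rule (for example RFC 6724 style
 * IPv6 over IPv4) can break the tie instead of measurement noise. */
static int faster(double a, double b)
{
	if (fabs(a - b) < PING_EPSILON)
		return 0;               /* equal for selection purposes */
	return a < b ? -1 : 1;
}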

With the above in mind people also might like to pass hard IPv4 and IPv6
addresses from the map to autofs to not rely on its decisions (because
they (think) know their network better). I vaguely remember I could not
put a []-quoted IPv6 address in my executable map while IPv4 was fine, or
something like that ? Was there an earlier mail ?

> 
> > I am unsure how to reproduce what you did.
> 
> I'm using launchpad to provide a ppa apt source.
> 
> So the installed debs will replace existing packages, notably libtirpc,
> rpcbind, nfs-common and autofs.
> 
Yes, I did not think you had libtirpc 1.0.1 as a Debian package. That
explains it. I will have to check how to do this then.

> 
> Don't be confused by the change in autofs configuration location in
> autofs 5.1.1 (/etc/autofs.conf).
> 
> If you only change the autofs configuration in /etc/default/autofs
> (IIRC) that will override the new configuration allowing you to switch
> between older and newer versions of autofs without configuration
> inconsistencies.
OK.

> 
> I hope that the autofs ppa version will perform the mount fine, as long
> as the server is responding but that's one thing we're here to sort out,
> ;)
> 
If you could provide a link I can try that. I will make a snapshot beforehand
so there are no problems going back.

Best Regards

Christof

-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-28  3:21                             ` Ian Kent
@ 2016-04-28  9:12                               ` Christof Koehler
  0 siblings, 0 replies; 49+ messages in thread
From: Christof Koehler @ 2016-04-28  9:12 UTC (permalink / raw)
  To: Ian Kent; +Cc: autofs

Hello,

so this is about the same problem quota/rpc.quotad has with the export root.
I have run into that already.

I will simply try it by installing the server os into another vm. 

Best Regards

Christof

On Thu, Apr 28, 2016 at 11:21:51AM +0800, Ian Kent wrote:
> On Thu, 2016-04-28 at 10:56 +0800, Ian Kent wrote:
> > > 
> > > Another question: What is your expectation in a situation where only
> > > IPv6 is available, no IPv4 for the server ? Is  the one mentioned in
> > > #25 of
> > > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=737679
> > > expected to be still a working solution ? I might have an NFS server
> > > soon
> > > which, due to a routing conflict difficult to resolve, would only
> > > get
> > > an IPv6 address visible to the clients. I should test this case with
> > > a
> > > dedicated vm as server anyway ...
> > 
> > I think we've covered that above.
> 
> Actually, maybe not.
> 
> There's a rather long story associated with the use of NFSv4 and the the
> associated server mount (or export) path.
> 
> It depends on kernel version and nfs-utils version.
> 
> Put simply Linux NFS originally only allowed the use of "/" for NFSv4
> mounts but autofs couldn't work out if a translation was needed because
> of limited information available from the server.
> 
> So Linux NFS was changed to behave like other implementations and allow
> the same paths as is used in NFSv3 (and v2). That has the benefit of
> allowed for consistent fall back from v4 to v3 as well.
> 
> > 
> > The scenario in that bug shows autofs behaving as expected due to lack
> > of IPv6 support in glibc I think.
> > 
> > I hope that the autofs ppa version will perform the mount fine, as
> > long
> > as the server is responding but that's one thing we're here to sort
> > out,
> > ;)
> > 
> > Ian
> > --
> > To unsubscribe from this list: send the line "unsubscribe autofs" in

-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-28  9:10                             ` Christof Koehler
@ 2016-04-28 10:50                               ` Ian Kent
  2016-04-28 11:26                                 ` Christof Koehler
  0 siblings, 1 reply; 49+ messages in thread
From: Ian Kent @ 2016-04-28 10:50 UTC (permalink / raw)
  To: christof.koehler; +Cc: autofs

On Thu, 2016-04-28 at 11:10 +0200, Christof Koehler wrote:
> 
> > 
> > Perhaps it is time to add this preference now anyway.
> 
> Perhaps I should point out that there is one more indirect factor for
> consideration: mount. 
> 
> In principle RFC 6724 also contains recommendations what IPv6 address
> to
> pick (GUA or ULA for example, or among multiple GUAs), in fact that is
> its main subject. As far as I understand it, and observed what ssh
> does,
> if there is a GUA and ULA the ULA (fd5f:... in this case) should be
> prefered (selected). Obviously mount (mount.nfs4) does not care about
> this recommendation either, in my test it choose randomly (?) one or
> the
> other IPv6 address. May be due to the order of addresses in the DNS
> reply. I will try to understand this better. 

Right, there's no guarantee of order of returned results from DNS, I
believe.

> 
> So even mount does not fully care about RFC 6724 or my basic
> assumption
> about the whole process is wrong in some way. I have to work that out.
> In
> worst case my expectation has been wrong from the beginning when I
> brought this question up.

Sounds like I'll need to actually have a look at that RFC before doing
anything, ;)

> 
> > 
> > I think the only the only question I need to answer is what
> > influence
> > (if any) response time should play between IPv4 and IPv6. I think it
> > best to not use it at all to start with (which I think should be the
> > way
> > it is now, once a v6 over v4 ordering preference is added).
> In principle I would agree, ignoring RFC6724 IPv6 address selection
> recommendations which, as you pointed out, are not covering this case
> when response times are important.
> 
> My current understanding of RFC6724 leads to two remarks then:
> 
> As far as I understand the log output you are deciding on a hard
> comparison basis, so even a one microsecond difference decides.
> However, ping times (response times) on ethernet effectively show
> random variations, sometimes in the millisecond range.

True but the worst case is you end up with a trivial load balancing
effect.

Say, for example, you had multiple distinct interfaces on a machine to
increase available throughput (believe it or not I've seen it done) then
this sort of load balancing could be beneficial.

But then we're talking about actual IPv6 address type selection ...

> In numerics (when comparing results) you would define an intervall
> where 
> numbers are considered to be effectively equal to catch rounding
> errors.
> One could use RFC6724 to decide within such an interval. I have no
> idea
> about the complexity involved, though. How does ssh handle that ?
> Probably it relies to some service mechanism to get this handled for
> it.

Still, it may be worth considering, yes.

> 
> With the above in mind  people also might like to pass hard IPv4 and
> IPv6
> addresses from the map to autofs to not rely on its decisions (because
> they (think)
> know their network better). I vaguely remember I could not put an []
> quoted IPv6 address in my executable map while IPv4 was fine or
> something like that ? Was there an earlier mail ?

Mmm .. you should be able to put addresses in the map.

I recall some problems around the issue of bracketed addresses.

Internally, some operations require not having them while others require
them and when I originally wrote the IPv6 code I didn't consider this
(in fact I wasn't ware of the conventions that were developing at the
time).

I'm fairly sure the "with or without square brackets" handling isn't
consistent or documented (or even settled as a convention in autofs, for
that matter) from an autofs user perspective.

That's something that needs to be fixed.
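
For reference, the bracketed form I mean is the one mount.nfs(8) documents
for IPv6 literals, so on the command line one would write something like
(address made up):

mount -t nfs4 '[2001:db8::118]':/locals /local/core330

and the map entry one would want to be able to write is along the lines of

core330  -fstype=nfs4,rw  [2001:db8::118]:/locals

but whether autofs parses that consistently everywhere is exactly the bit
that needs sorting out.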

> 
> > 
> > > I am unsure how to reproduce what you did.
> > 
> > I'm using launchpad to provide a ppa apt source.
> > 
> > So the installed debs will replace existing packages, notably
> > libtirpc,
> > rpcbind, nfs-common and autofs.
> > 
> Yes, I did not think you had libtirpc 1.0.1 as debian package. That
> explains it then. I will have to check how to do this then.
> 
> > 
> > Don't be confused by the change in autofs configuration location in
> > autofs 5.1.1 (/etc/autofs.conf).
> > 
> > If you only change the autofs configuration in /etc/default/autofs
> > (IIRC) that will override the new configuration allowing you to
> > switch
> > between older and newer versions of autofs without configuration
> > inconsistencies.
> OK.
> 
> > 
> > I hope that the autofs ppa version will perform the mount fine, as
> > long
> > as the server is responding but that's one thing we're here to sort
> > out,
> > ;)
> > 
> If you could provide a link I can try that. Making a snapshot before
> and there
> are no problems going back.

The ppa is done, to the extent that I wanted to, so check it out.
Have a look at https://launchpad.net/~raven-au.

Ian
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-28 10:50                               ` Ian Kent
@ 2016-04-28 11:26                                 ` Christof Koehler
  2016-04-28 12:40                                   ` Christof Koehler
  2016-04-29  1:54                                   ` Ian Kent
  0 siblings, 2 replies; 49+ messages in thread
From: Christof Koehler @ 2016-04-28 11:26 UTC (permalink / raw)
  To: Ian Kent; +Cc: autofs

Hello,

On Thu, Apr 28, 2016 at 06:50:58PM +0800, Ian Kent wrote:
> On Thu, 2016-04-28 at 11:10 +0200, Christof Koehler wrote:
> 
> Sounds like I'll need to actually have a look at that RFC before doing
> anything, ;)
These might be nicer reads:
http://biplane.com.au/blog/?p=22
http://biplane.com.au/blog/?p=30

Please note that the actual "ip addrlabel" output on a pristine 16.04
is 
prefix ::1/128 label 0 
prefix ::/96 label 3 
prefix ::ffff:0.0.0.0/96 label 4 
prefix 2001::/32 label 6 
prefix 2001:10::/28 label 7 
prefix 3ffe::/16 label 12 
prefix 2002::/16 label 2 
prefix fec0::/10 label 11 
prefix fc00::/7 label 5 
prefix ::/0 label 1
so that catches ULAs in the fc00::/7 already with a label separating
them. I additionally add
prefix 2001:638:708:1261:2000::/96 label 99 
prefix 2001:638:708:1261:1000::/96 label 99
but that should not matter.

> > However, ping times (response times) on ethernet effectively show
> > random variations, sometimes in the millisecond range.
> 
> True but the worst case is you end up with a trivial load balancing
> effect.

If addresses are equivalent that is a desirable effect. My goal however
was to distinguish between GUA and ULA relying on the RFC for selection
to avoid headaches. I will have to rethink this for sure.

> Say, for example, you had multiple distinct interfaces on a machine to
> increase available throughput (believe it or not I've seen it done) then
> this sort of load balancing could be beneficial.

Yes, been there, done that; but I would consider that something no longer needed 
in this enlightened age:
https://help.ubuntu.com/community/UbuntuBonding#Descriptions_of_bonding_modes
Especially Mode 4. But I think you know them :-)


> 
> Mmm .. you should be able to put addresses in the map.
I will try again to make sure.

> 
> The ppa is done, to the extent that I wanted too, so check it out.
> Have a look at https://launchpad.net/~raven-au.
> 
Thank you, I will try and report back.


Best Regards

Christof

-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-28 11:26                                 ` Christof Koehler
@ 2016-04-28 12:40                                   ` Christof Koehler
  2016-04-29  1:54                                   ` Ian Kent
  1 sibling, 0 replies; 49+ messages in thread
From: Christof Koehler @ 2016-04-28 12:40 UTC (permalink / raw)
  To: Ian Kent; +Cc: autofs

Hello,
I should add http://biplane.com.au/blog/?p=122 as an easy read on
destination address selection (/etc/gai.conf).

Quite a piece of machinery if the application chooses to use it.

Best Regards

Christof

On Thu, Apr 28, 2016 at 01:26:48PM +0200, Christof Koehler wrote:
> Hello,
> 
> On Thu, Apr 28, 2016 at 06:50:58PM +0800, Ian Kent wrote:
> > On Thu, 2016-04-28 at 11:10 +0200, Christof Koehler wrote:
> > 
> > Sounds like I'll need to actually have a look at that RFC before doing
> > anything, ;)
> These might be nicer reads:
> http://biplane.com.au/blog/?p=22
> http://biplane.com.au/blog/?p=30
> 
> Please note that the actual "ip addrlabel" output on a pristine 16.04
> is 
> prefix ::1/128 label 0 
> prefix ::/96 label 3 
> prefix ::ffff:0.0.0.0/96 label 4 
> prefix 2001::/32 label 6 
> prefix 2001:10::/28 label 7 
> prefix 3ffe::/16 label 12 
> prefix 2002::/16 label 2 
> prefix fec0::/10 label 11 
> prefix fc00::/7 label 5 
> prefix ::/0 label 1
> so that catches ULAs in the fc00::/7 already with a label separating
> them. I additional add
> prefix 2001:638:708:1261:2000::/96 label 99 
> prefix 2001:638:708:1261:1000::/96 label 99
> but that should not matter.
> 
> > > However, ping times (response times) on ethernet effectively show
> > > random variations, sometimes in the millisecond range.
> > 
> > True but the worst case is you end up with a trivial load balancing
> > effect.
> 
> If addresses are equivalent that is a desirable effect. My goal however
> was to distinguish between GUA and ULA relying on the RFC for selection
> to avoid headaches. I will have to rethink this for sure.
> 
> > Say, for example, you had multiple distinct interfaces on a machine to
> > increase available throughput (believe it or not I've seen it done) then
> > this sort of load balancing could be beneficial.
> 
> Yes, been there, done that; but I would consider that something no longer needed 
> in this enlightened age:
> https://help.ubuntu.com/community/UbuntuBonding#Descriptions_of_bonding_modes
> Especially Mode 4. But I think you know them :-)
> 
> 
> > 
> > Mmm .. you should be able to put addresses in the map.
> I will try again to make sure.
> 
> > 
> > The ppa is done, to the extent that I wanted too, so check it out.
> > Have a look at https://launchpad.net/~raven-au.
> > 
> Thank you, I will try and report back.
> 
> 
> Best Regards
> 
> Christof
> 
> -- 
> Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
> Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
> Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
> 28359 Bremen  
> 
> PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
> --
> To unsubscribe from this list: send the line "unsubscribe autofs" in

-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-28 11:26                                 ` Christof Koehler
  2016-04-28 12:40                                   ` Christof Koehler
@ 2016-04-29  1:54                                   ` Ian Kent
  2016-04-29 14:10                                     ` Christof Koehler
  1 sibling, 1 reply; 49+ messages in thread
From: Ian Kent @ 2016-04-29  1:54 UTC (permalink / raw)
  To: christof.koehler; +Cc: autofs

On Thu, 2016-04-28 at 13:26 +0200, Christof Koehler wrote:
> Hello,
> 
> On Thu, Apr 28, 2016 at 06:50:58PM +0800, Ian Kent wrote:
> > On Thu, 2016-04-28 at 11:10 +0200, Christof Koehler wrote:
> > 
> > Sounds like I'll need to actually have a look at that RFC before
> > doing
> > anything, ;)
> These might be nicer reads:
> http://biplane.com.au/blog/?p=22
> http://biplane.com.au/blog/?p=30
> 
> Please note that the actual "ip addrlabel" output on a pristine 16.04
> is 
> prefix ::1/128 label 0 
> prefix ::/96 label 3 
> prefix ::ffff:0.0.0.0/96 label 4 
> prefix 2001::/32 label 6 
> prefix 2001:10::/28 label 7 
> prefix 3ffe::/16 label 12 
> prefix 2002::/16 label 2 
> prefix fec0::/10 label 11 
> prefix fc00::/7 label 5 
> prefix ::/0 label 1
> so that catches ULAs in the fc00::/7 already with a label separating
> them. I additional add
> prefix 2001:638:708:1261:2000::/96 label 99 
> prefix 2001:638:708:1261:1000::/96 label 99
> but that should not matter.
> 
> > > However, ping times (response times) on ethernet effectively show
> > > random variations, sometimes in the millisecond range.
> > 
> > True but the worst case is you end up with a trivial load balancing
> > effect.
> 
> If addresses are equivalent that is a desirable effect. My goal
> however
> was to distinguish between GUA and ULA relying on the RFC for
> selection
> to avoid headaches. I will have to rethink this for sure.

I've had a quick look at this and I'm not sure I need to do much at all
to help with this.

The proximity values I use for list insertion are fairly simple minded
and I use getaddrinfo(3) everywhere.

If I add an additional sort value for IPv6 addresses that lies between
local address proximity and the next lower proximity sort value, and use
the order of addresses returned by getaddrinfo(3), I should get an order
that is RFC 3484 with some adjustments made by glibc for real world
experience, and possibly some adjustments related to RFC 6724 to the
extent glibc has been willing to adopt them.
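
As an aside, that ordering can be inspected from a shell because
getent(1) resolves through getaddrinfo(3); something like the following
(host name made up) should print the candidate addresses in the order
getaddrinfo(3) would hand them back:

getent ahosts server.example.org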

All I would need to do then is check the local interface comparison code
since addresses corresponding to a local interface have special
handling.

The only difficulty would be with IPv4 addresses that also correspond to
a local interface, due to the special handling.

Would that approach help with what you're trying to achieve?

Ian
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-29  1:54                                   ` Ian Kent
@ 2016-04-29 14:10                                     ` Christof Koehler
  2016-04-29 14:42                                       ` Christof Koehler
  2016-04-30  3:21                                       ` Ian Kent
  0 siblings, 2 replies; 49+ messages in thread
From: Christof Koehler @ 2016-04-29 14:10 UTC (permalink / raw)
  To: Ian Kent; +Cc: autofs

Hello,

> 
> Would that approach help with what you're trying to achieve?
> 

I am not sure of anything right now anymore after noticing what mount 
does. On top, I am not sure if I understand what you are proposing :-)

So, please allow me to write down what my thinking was and what I
thought I needed instead of answering straight away. I will try to be
brief about it. Maybe you have a different perspective on what I am
trying to do and can point out whether it is unreasonable, or whether it
is something that can or should be solved at mount's or autofs's level
at all.

Independent of that, maybe a) fixing the situation where
autofs/mount falls back to IPv4, which I understand is a bug, and b)
having the possibility to pass IPv6 addresses as a result of an
executable map lookup (as is possible with IPv4 addresses) is what I
really need. I assume these two might be easier to do ? If I can pass
IPv6 addresses from the executable map I can shell script what I think
I need myself. Of course I still have to check if passing IPv6 is
actually not possible as I speculated earlier.
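
As a sketch of the kind of executable (program) map I have in mind
(untested; mount point, server name and ULA are just the examples from
this thread), the script gets the lookup key as its first argument and
prints a single map entry on stdout:

#!/bin/sh
# hypothetical program map, referenced from the master map with
# something like:  /local  program:/etc/auto.local
# $1 is the lookup key, here the server's short name
case "$1" in
core330)
    # hand autofs the ULA directly, square bracketed as in nfs(5)
    echo "-fstype=nfs4,rw,intr,nosuid,soft,nodev [fd5f:852:a27c:1261:2000::118]:/locals"
    ;;
esac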

But please read on keeping in mind that the original observation which started 
this was my surprise discovering autofs/mount suddenly falling back to IPv4 while
I was still naively assuming IPv6 would simply work at that time.

As you know it is completely normal with IPv6 that a machine (server or client)
has several IP addresses: link local fe80:: (always there, I will ignore
it), one (or more) statically assigned 2001:: GUAs (-> DNS AAAA), a dynamically
assigned GUA/derived GUA privacy address, an fd5f:: ULA (should not be in publicly
visible DNS and should not get routed beyond the organization's boundary) and on
top of that one IPv4 address.

In the first (out of the box) setup we had (GUA only), mount/autofs (and
everything else like ssh) were happily using the privacy address with limited
lifetime to connect to (NFS) servers, both workstations and dedicated
fileservers. This strikes me as problematic for several reasons:
1. The privacy address is supposed to change after some time (the old one
   becomes deprecated), so I cannot easily identify the client on the server
2. I have to NFS export unconditionally to at least a whole /64. I would like to
   export on a per-client basis, either hostname or IP; but see [1]
3. If the lifetime of the privacy address ends it becomes deprecated and
   (I did not test that) NFS requests may then suddenly arrive from the
   current privacy address while the mount was made via a no longer
   existing (or at least deprecated) one ? Not sure, but I would like to avoid
   situations like that from the beginning.

Manually adding the two addrlabels mentioned in my previous mail makes sure 
that the clients will use their statically assigned GUAs to connect to the
servers if using mount or autofs with only a single IPv6 GUA entry in the
private DNS.

Still, I was not completely at ease with using GUAs like this:
1. You have to make sure/be sure the manual addrlabel is always there
   and you might forget that there was a modification to defaults at
   inconvenient moments ("principle of least undocumented change" ;-)
2. The NFS servers/clients are on GUAs and might in theory leak traffic
   all over the internet. In our situation we have to use VRF routing on the
   university's Cisco 6500 routers; one typo and we become world
   accessible. Of course there are firewalls and ip6tables rules on the
   servers themselves. Also client traffic might get misdirected and leak out
   on a GUA. On top, rpc listening on every address it can find and the
   kitchen sink is a little problem anyway.  See also [1].
3. The NFS servers share a /64 with random laptops; we (i.e. "me" ) could put
   different VLANs on different wall outlets, but in practice with the
   way people (scientists) behave ...

In theory using ULAs instead of GUAs for NFS sounds like a nice thing
then. 

Favouring ULA over GUA if possible is the RFC's default, so no manual
addrlabel required.
Internal traffic would use ULAs (which all routers here blackhole) and
therefore stays internal. Outside DNS queries would not resolve ULAs
anyway. Only traffic directed outside goes outside using the 
appropriate GUAs. There is a weak separation from the laptops, they can
still ssh in via the static GUA assigned to every server (workstation),
but I can restrict NFS exports to the known ULAs easily. 
On top, in the unlikely event that we ever have to
change GUAs ("renumbering" in IPv6 terms) the ULAs would stay stable.

Only: neither mount (which I just discovered now) nor autofs take the ULA 
vs. GUA preference or the possibility that not all addresses might be
equal into account as I initially assumed, with autofs eventually even falling 
back to IPv4 due to the bug you mentioned. So this is where my idea to
use ULAs clearly does not work. I am no longer sure it should work,
anyway.  Also, as you can see I could work with GUAs only, but someone else
might stumble upon the same situation later if IPv6 ever gets really widespread.


Thank you very much for reading all this !


Best Regards

Christof

[1] I am aware that with NFS4 the solution is of course to use Kerberos
security. However, currently the old cluster (Ubuntu 10.04 with hand
rolled kernels, drivers and OFED stack; tcp6 transport for NFS is only
available on 10.10 or later) is using the same servers and when a
Kerberos Ticket runs out while a calculation is running (think 10 day
jobs) you have a problem. Also queuing systems (Torque/Maui, SLURM) are not
really able to take care of Kerberos for the user. This situation will change 
with the new cluster which is completely separated.  Then I will think about 
moving to Kerberos again.


-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-29 14:10                                     ` Christof Koehler
@ 2016-04-29 14:42                                       ` Christof Koehler
  2016-04-30  3:21                                       ` Ian Kent
  1 sibling, 0 replies; 49+ messages in thread
From: Christof Koehler @ 2016-04-29 14:42 UTC (permalink / raw)
  To: Ian Kent; +Cc: autofs

I should add that routers usually do not carry routes for ULAs in the
first place, because every individual can generate his own addresses out
of fc00::/7 and there is no central registration authority or even a
registration requirement. Our packets would in the worst case vanish at the DFN [1]
exchange nodes to the internet if they ever leaked out of the
university's network. And there is no route into the university's
network for any ULA. So it is IMHO a bit better than pure security by obscurity.

[1] https://en.wikipedia.org/wiki/Deutsches_Forschungsnetz

On Fri, Apr 29, 2016 at 04:10:44PM +0200, Christof Koehler wrote:
> Hello,
> 
> > 
> > Would that approach help with what you're trying to achieve?
> > 
> 
> I am not sure of anything right now anymore after noticing what mount 
> does. On top, I am not sure if I understand what you are proposing :-)
> 
> So, please allow me to write down what my thinking was and what I 
> thought I needed instead of answering straight away. I will try to be
> brief about it. Maybe you have a different perspective on what I am 
> trying to do and can point out if it is unreasonable or if it is
> something with can/should be solved on mounts or autofs's level at all
> or not at all.
> 
> Independent of that maybe a) fixing the situation where
> autofs/mount falls back to IPv4, which I understand is a bug, and b)
> having the possibility to pass IPv6 addresses as a result of an
> exectutable map lookup (as is possible with IPv4 adresses) is what I really 
> need.  I assume these two might be easier to do ? If I can pass IPv6 addresses 
> from the exectuable map I can shell script what I think I need myselves. Of
> course I still have to check if passing IPv6 is actually not possible as
> I speculated earlier.
> 
> But please read on keeping in mind that the original observation which started 
> this was my surprise discovering autofs/mount suddenly falling back to IPv4 while
> I was still naively assuming IPv6 would simply work at that time.
> 
> As you know it is completely normal with IPv6 that a machine (server or client) 
> has several IP adresses: link local fe80:: (always there, I will ignore
> it), one (or more) statically assigned 2001:: GUAs (-> DNS AAAA), dynamically 
> assigned GUA/derived GUA privacy address, fd5f:: ULA (should not be in public 
> visible DNS and should not get routed beyond the organizations boundary) and on 
> top of that one IPv4 address.
> 
> In the first (out of the box) setup we had (GUA only) mount/autofs (and 
> everything else like ssh) were happiliy using the privacy address with limited 
> lifetime to connect to (NFS) servers, both workstations and dedicted 
> fileservers. This strikes me as problematic for several reasons:
> 1. The privacy address is supposed to change after some time (old
>    becomes deprecated), so I cannot easily identify the client on the server
> 2. I have to NFS export unconditionally to at least a whole /64. I like to
>    export on a per client basis, either hostname or IP; but see [1]
> 3. If the lifetime of the privacy address ends it becomes deprecated and
>    (I did not test that) NFS requests my then suddenly arrive from the
>    current privacy address while the mount was made via a no longer
>    existing (or at least deprecated) one ? Not sure, but I would like to avoid 
>    situations like that from the beginning.
> 
> Manually adding the two addrlabels mentioned in my previous mail makes sure 
> that the clients will use their statically assigned GUAs to connect to the
> servers if using mount or autofs with only a single IPv6 GUA entry in the
> private DNS.
> 
> Still, I was not completely at ease with using GUAs like this:
> 1. You have to make sure/be sure the manual addrlabel is always there
>    and you might forget that there was a modification to defaults at
>    inconvenient moments ("principle of least undocumented change" ;-)
> 2. The NFS servers/clients are on a GUAs and might leak in theory traffic 
>    all over the internet. In our situation we have to use VRF routing on the
>    Universities Cisco 6500 Routers, one typo and we get world
>    accessible. Of course there a firewalls and ip6table rules on the
>    servers themselves. Also client traffic might get misdirected and leak out 
>    on a GUA. On top,  rpc listening on every address it can find and the 
>    kitchen sink is a little problem anyway.  See also [1].
> 3. The NFS servers share a /64 with random laptops; we (i.e. "me" ) could put
>    different VLANs on different wall outlets, but in practice with the
>    way people (scientists) behave ...
> 
> In theory using ULAs instead of GUAs for NFS sounds like a nice thing
> then. 
> 
> Favouring ULA over GUA if possible is the RFC's default, so no manual
> addrlabel required.
> Internal traffic would use ULAs (which all routers here blackhole) and
> therefore stays internal. Outside DNS queries would not resolve ULAs
> anyway. Only traffic directed outside goes outside using the 
> approrpiate GUAs. There is a weak separation from the laptops, they can
> still ssh in via the static GUA assigned to every server (workstation),
> but I can restrict NFS exports to the known ULAs easily. 
> On top, in the unlikely event that we ever have to
> change GUAs ("renumbering" in IPv6 terms) the ULAs would stay stable.
> 
> Only: neither mount (which I just discovered now) nor autofs take the ULA 
> vs. GUA preference or the possibility that not all addresses might be
> equal into account as I initially assumed, with autofs eventually even falling 
> back to IPv4 due to the bug you mentioned. So this is where my idea to
> use ULAs clearly does not work. I am no longer sure it should work,
> anyway.  Also, as you can see I could work with GUAs only, but someone else 
> might stumble upon the same situation later if IPv6 ever gets real widespread.
> 
> 
> Thank you very much for reading all this !
> 
> 
> Best Regards
> 
> Christof
> 
> [1] I am aware that with NFS4 the solution is of course to use Kerberos
> security. However, currently the old cluster (Ubuntu 10.04 with hand
> rolled kernels, drivers and OFED stack; tcp6 transport for NFS is only
> available on 10.10 or later) is using the same servers and when a
> Kerberos Ticket runs out while a calculation is running (think 10 day
> jobs) you have a problem. Also queuing systems (Toreque/Maui, SLURM) are not 
> really able to take care of Kerberos for the user. This situation will change 
> with the new cluster which is completely separated.  Then I will think about 
> moving to Kerberos again.
> 
> 
> -- 
> Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
> Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
> Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
> 28359 Bremen  
> 
> PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
> --
> To unsubscribe from this list: send the line "unsubscribe autofs" in

-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-29 14:10                                     ` Christof Koehler
  2016-04-29 14:42                                       ` Christof Koehler
@ 2016-04-30  3:21                                       ` Ian Kent
  2016-04-30 11:36                                         ` Christof Koehler
  1 sibling, 1 reply; 49+ messages in thread
From: Ian Kent @ 2016-04-30  3:21 UTC (permalink / raw)
  To: christof.koehler; +Cc: autofs

On Fri, 2016-04-29 at 16:10 +0200, Christof Koehler wrote:
> Hello,
> 
> > 
> > Would that approach help with what you're trying to achieve?
> > 
> 
> I am not sure of anything right now anymore after noticing what mount 
> does. On top, I am not sure if I understand what you are proposing :-)

I'll need to review what we've said already for the problem with
mount.nfs(8).

> 
> So, please allow me to write down what my thinking was and what I 
> thought I needed instead of answering straight away. I will try to be
> brief about it. Maybe you have a different perspective on what I am 
> trying to do and can point out if it is unreasonable or if it is
> something with can/should be solved on mounts or autofs's level at all
> or not at all.

I can probably make suggestions but, TBH, I haven't paid enough
attention to IPv6 and the book that I originally bought to get basic
information is quite old and is out of date.

So I'm on a bit of a catchup with this.

I'll need to re-read what we've said later in this mail because there
seems to be quite a few moving parts.

I think the problems you're tackling need to be simplified a lot to make
progress.

Trying to hold the big picture all at once is going to be confusing and
will make the issues seem much more complicated than they are. That
doesn't mean the big picture should be ignored in doing this; just
not letting it block progress is the goal.

So lets take a small chunk of the problem, resolve it and look at what's
left.

> 
> Independent of that maybe a) fixing the situation where
> autofs/mount falls back to IPv4, which I understand is a bug, and b)

Partly. The situation I described so far is that IPv6 won't work at all if
not using libtirpc; that's fixed by building autofs with libtirpc
support, no big deal there, we have that already.

There is a problem with how autofs decides if a server has multiple
addresses. It is not hard to fix, but I should check why I changed it; the
person that reported the problem might have had a good reason for
requesting the change. And it may actually be what we want in the case here
anyway.

I believe we originally started talking about IPv4 fall back because of
the lack of libtirpc support in Ubuntu autofs.

That led to a discussion about how autofs establishes the order of host
addresses it tries and I also said that, for IPv6, that's not properly
done. Unfortunately that became clouded with talk of other aspects of
what the autofs availability probe does.

The other thing that was discussed was how autofs decides whether to use
an address or a name. The answer to that is simple, if autofs thinks a
host has more than one address it will use the address not the name in
mount attempts.

That may need some work due to inconsistencies between usage of returned
addresses (having multiple addresses) and hosts available for mount
(addresses that responded to the availability check).

There was also uncertainty about how mount.nfs(8) ends up with an address.
Let's leave that alone for the moment.

So I went looking and found that the order of addresses returned by
getaddrinfo(3) is in fact a modified RFC 3484 ordering.

I then responded saying this ordering may be sufficient, and a fairly
simple way to implement the needed address selection (and I mostly
ignored IPv4 with this suggestion).

I guess what I'm saying for this bit of the problem is, I think I can
just use the order of hosts returned by getaddrinfo(3) for address
ordering (i.e. the order mounts are tried), and how to decide if a host
has multiple addresses should just be worked out along the way,
depending on what we actually need.

Having done that, adding an autofs configuration option to not use IPv4
addresses for hosts with multiple addresses should be sufficient to
force IPv6-only use. Also not difficult, but I think it will have some
special cases to consider. I didn't mention this before because I
thought the first thing to resolve was address selection.

As I say, I'll need to re-read what we have said and compare again to
the RFC ordering to work out if that is close to what you need, but I had
the impression it was.

The remaining problem is that autofs will remove host addresses from the
list if it thinks they are not responding.

Availability really must be checked because trying to mount from a host
that isn't responding can result in lengthy delays which is really bad
for interactive use.

This is something of an autofs problem in itself because autofs can't
know if a host is really unavailable or simply rebooting.

This is the main reason for lengthy waits, the kernel RPC can't afford
to give up just because an RPC call takes longer than expected for fear
of causing file system corruption for NFS mounts.

But having said that, the availability check can be bypassed, if
required, by setting a single autofs configuration option.

Umm .. am I close to what we've discussed or have I misunderstood?

Will this approach move closer to what you need?

> having the possibility to pass IPv6 addresses as a result of an
> exectutable map lookup (as is possible with IPv4 adresses) is what I
> really 
> need.  I assume these two might be easier to do ? If I can pass IPv6
> addresses 
> from the exectuable map I can shell script what I think I need
> myselves. Of
> course I still have to check if passing IPv6 is actually not possible
> as
> I speculated earlier.

Right, but AFAIK (and there might be problems with it) I think that is a
matter of address specification.

The brackets vs. without brackets usage for example.

There's also the question of whether autofs handles link local addresses
properly, I think they can (or are required to) specify an interface as
well.

Certainly, there's plenty of room for inconsistent handling of addresses
so I probably will need to spend time on it.

I haven't yet had a chance to study the logs you sent so I may get more
information from them. Was there an example of this in those?

Anyway, with a little work this should be resolved and is something
that's needed.

I'm going to stop here because I think sorting out these two things
needs to be done before re-assessing what the remaining situation is.

Ian
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-30  3:21                                       ` Ian Kent
@ 2016-04-30 11:36                                         ` Christof Koehler
  2016-04-30 15:15                                           ` Christof Koehler
                                                             ` (2 more replies)
  0 siblings, 3 replies; 49+ messages in thread
From: Christof Koehler @ 2016-04-30 11:36 UTC (permalink / raw)
  To: Ian Kent; +Cc: autofs

Hello.

On Sat, Apr 30, 2016 at 11:21:11AM +0800, Ian Kent wrote:
> On Fri, 2016-04-29 at 16:10 +0200, Christof Koehler wrote:
> > Hello,
> > 
> > > 
> > > Would that approach help with what you're trying to achieve?
> > > 
> > 
> > I am not sure of anything right now anymore after noticing what mount 
> > does. On top, I am not sure if I understand what you are proposing :-)
> 
> I'll need to review what we've said already for the problem with
> mount.nfs(8).
Just to be clear: I am not referring to the fallback to IPv4, that has
indeed been covered, see below. I am referring to its stand-alone behaviour,
i.e. when called by hand on the command line, of alternating between available IPv6
addresses. I did not observe this before and it makes me unsure whether my
initial request for autofs's behaviour might have been unreasonable.

> > 
> > Independent of that maybe a) fixing the situation where
> > autofs/mount falls back to IPv4, which I understand is a bug, and b)
> 
> Partly, the situation I described so far is, IPv6 won't work at all if
> not using libtirpc, that's fixed by building autofs with libtirpc
> support, no big deal there, we have that already.
> 
> I believe we originally started talking about IPv4 fall back because of
> the lack of libtirpc support in Ubuntu autofs.
Yes. 

Eventually the current behaviour could be documented as "working for
now, not guaranteed to work at all without libtirpc in future releases" 
as a compromise ?

It would have been helpful if the man page had included a hint that
libtirpc is a prerequisite for IPv6, similar to its mention in nfs(5)
(search "TI-RPC").

> There was also uncertainty about how mount.nfs(8) ends with an address.
> Lets leave that alone for the moment.
This is clearly not an autofs issue. I will probably ask
somewhere else if I cannot find an answer myself.

mount.nfs4(8) does not mention IPv6 at all, and nfs(5) does not give
any information about a possible address selection process.

> 
> So I went looking and found that the order of addresses returned by
> getaddrinfo(3) is in fact a modified rfc 3484 ordering.
> 
> I then responded saying this ordering may be sufficient, and a fairly
> simple way to implement the needed address selection (and I mostly
> ignored IPv4 with this suggestion).
My problem is that I did/do not fully understand the details of the
sorting process you described. Now, just reading getaddrinfo(3) saying "The
sorting function used within getaddrinfo() is defined in RFC 3484", I would
say that the list as returned by getaddrinfo should be what I wanted (I am not
sure what you mean by "modified", the tweaking of gai.conf ?).

That is why I tried to explain the big picture. For me it is important that it 
consistently selects the ULA unless specifically instructed to do otherwise.

> Having done that adding an autofs configuration option to not use IPv4
> addresses for hosts with multiple addresses should be sufficient to
> force IPv6 only use. Also not difficult but I Think will have some
> special cases to consider. I didn't mention this before because I
> thought the first thing to resolve was address selection.
For me the actual selection of an existing ULA above IPv4 and GUA is 
"the big thing". 

If I can pass the right IPv6 address via an executable map, bypassing
autofs's own selection mechanisms (Why should it use that mechanism if it
gets an IP instead of a hostname ? Would it do it ? Have to try.) I guess that
is all I really need, as an alternative (or kludge) to changing the selection
code.

> 
> As I say, I'll need to re-read what we have said and compare again to
> the rfc ordering to work out if that is close to what you need but I had
> the impression it was.
I am now also thinking it is.

I will do the tests I promised (IPv6 address in the map, with and without the binaries
from your ppa) in the meantime.

> The remaining problem is that autofs will remove host addresses from the
> list if it thinks they are not responding.
That is a new one to me. You see how ignorant I unfortunately am of what is
actually happening in the machinery used to mount. 

> Availability really must be checked because trying to mount from a host
> that isn't responding can result in lengthy delays which is really bad
> for interactive use.

> 
> This is the main reason for lengthy waits, the kernel RPC can't afford
> to give up just because an RPC call takes longer than expected for fear
> of causing file system corruption for NFS mounts.
I never had to think about this, noticing only the final result which
simply works (thanks to all people who developed the software involved). 
If the server is really down it does not work of course, but with current 
kernels and soft mounts for non-homes that is only a minor problem (as 
opposed to the lockups with nfs2/3 on 2.x kernels), provided that it is not 
the user's home which is gone :-)

I use autofs primarily just as an abstraction layer when moving data around, 
not for failover or similar.  

> 
> But having said that, the availability check can be bypassed, if
> required, by setting single autofs configuration option.
Which option would that be ? I cannot find it ?

> 
> Umm .. am I in close to what we've discussed or have I misunderstood?
Today I think you are.

> 
> > having the possibility to pass IPv6 addresses as a result of an
> > exectutable map lookup (as is possible with IPv4 adresses) is what I
> > really 
> > need.  I assume these two might be easier to do ? If I can pass IPv6
> > addresses 
> > from the exectuable map I can shell script what I think I need
> > myselves. Of
> > course I still have to check if passing IPv6 is actually not possible
> > as
> > I speculated earlier.
> 
> Right, but AFAIK (and there might be problems with it) I think that is a
> mater of address specification.
> 
> The brackets vs. without brackets usage for example.
Even nfs(5) and exports(5) disagree. Pick one and document the
choice you like ? 
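
For illustration, with a documentation prefix, the two conventions side
by side as I read the man pages:

# fstab, nfs(5) style: address in square brackets
[2001:db8::118]:/locals  /mnt/locals  nfs4  defaults  0 0
# /etc/exports, exports(5) style: no brackets
/locals  2001:db8::/64(rw,no_subtree_check)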

> There's also the question of whether autofs handles link local addresses
> properly, I think they can (or are required to) specify an interface as
> well.
Due to the fact that a host might present the same fe80:: address
on multiple or all of its interfaces, in most cases an interface
specifier must be included so that routing selects the right link. Quite a few
tools and programs are broken or confusing from a user's perspective when handling
this [1,2].

I understand that people will run autofs on link local addresses, though.
ping6(8) and nfs(5) suggest "%interface" (with or without square brackets).
So if the syntax is documented it should be less of a problem.

That is why I ignored link local in my previous mail, they should be
left alone (reserved for internal use by the network stack) IMO. You can
use ULAs if you do not have/want GUAs, that is one of their legitimate use 
cases IMO ! 

> 
> I haven't yet had a chance to study the logs you sent so I may get more
> information from them. Was there an example of this in those?
Only demonstrating the nfs(5) square bracket convention calling mount and no
brackets in other debug output:

Apr 27 16:27:21 core400 automount[2764]: get_nfs_info: called with host
core330(2001:638:708:1261:2000::118) proto 6 version 0x40
...
Apr 27 16:28:00 core400 automount[2764]: mount_mount: mount(nfs):
calling mount -t nfs4 -s -o rw,intr,nosuid,soft,nodev
[2001:638:708:1261:2000::118]:/locals /local/core330
...
Apr 27 16:28:00 core400 automount[2764]: mount_mount: mount(nfs):
mounted [2001:638:708:1261:2000::118]:/locals on /local/core330
> 
> I'm going to stop here because I think sorting out these two things
> needs to be done before re-assessing what the remaining situation is.
> 

I will do the tests, rethink my approach and try to understand mount's
behaviour then.

Thank you very much !


Best Regards

Christof

[1]
https://blogs.gentoo.org/eva/2010/12/17/things-you-didnt-known-about-ipv6-link-local-address/
https://bugzilla.redhat.com/show_bug.cgi?id=136852
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=700999
http://forums.mozillazine.org/viewtopic.php?f=38&t=513822&start=0

-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-30 11:36                                         ` Christof Koehler
@ 2016-04-30 15:15                                           ` Christof Koehler
  2016-04-30 15:16                                           ` Christof Koehler
  2016-05-02  6:01                                           ` Ian Kent
  2 siblings, 0 replies; 49+ messages in thread
From: Christof Koehler @ 2016-04-30 15:15 UTC (permalink / raw)
  To: Ian Kent; +Cc: autofs

Hello,

Sorry for replying to myself again ... but this is important.

> 
> That is why I tried to explain the big picture. For me it is important that it 
> consistently selects the ULA unless specifically instructed to do otherwise.
I used this small program
http://www.logix.cz/michal/devel/various/getaddrinfo.c.xp
to check how the getaddrinfo output is actually sorted. The result (several runs
not shown) on the VM with Ubuntu 16.04 is

tckoe@core400:~$ ./a.out core330.bccms.uni-bremen.de
Host: core330.bccms.uni-bremen.de
IPv6 address: fd5f:852:a27c:1261:2000::118 (core330.bccms.uni-bremen.de)
IPv6 address: 2001:638:708:1261:2000::118 ((null))
IPv4 address: 192.168.220.118 ((null))
tckoe@core400:~$ ./a.out core330.bccms.uni-bremen.de
Host: core330.bccms.uni-bremen.de
IPv6 address: 2001:638:708:1261:2000::118 (core330.bccms.uni-bremen.de)
IPv6 address: fd5f:852:a27c:1261:2000::118 ((null))
IPv4 address: 192.168.220.118 ((null))

So clearly IPv6 is sorted before IPv4 (40 program runs) but the ordering of 
IPv6 addresses is not guaranteed. My basic assumption was wrong in that
respect. getaddrinfo does not reliably prefer the ULA if left alone. 

Adding "precedence  fc00::/7      45" to /etc/gai.conf (destination
address selection rule) fixes checking with 20 tests the order in the getaddrinfo 
output. Source address selection ("ip addrlabel") does not come into
this.
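
For the record, gai.conf(5) warns that a single precedence line
replaces the whole built-in table, so it is probably safer to restate
the RFC 3484 defaults as well; roughly (my fc00::/7 line added at the
end):

# /etc/gai.conf -- RFC 3484 default precedence table, restated
precedence  ::1/128        50
precedence  ::/0           40
precedence  2002::/16      30
precedence  ::/96          20
precedence  ::ffff:0:0/96  10
# prefer ULAs over other global addresses for destination selection
precedence  fc00::/7       45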

To conclude: Even without the response time considerations using pure
getaddrinfo output would not have done what I wanted. If I had not
noticed mount behaving against my expectations I might not have tested
this.

Second, I modified the map with a square-bracketed IPv6 address instead
of a hostname, using my own autofs package recompiled against libtirpc.
Trying to mount, I see that passing the IPv6 address to autofs in fact
appears to work now; it mounts as expected. See attached log fragment.
Response time checks do not appear to be actually used, which is reasonable.
So the idea that there was a problem was probably an artefact of using that
autofs binary without libtirpc on 14.04, too.

Apart from somehow dealing with the general question of perhaps
making libtirpc a requirement, the only request remaining from my
perspective is:
- Would it be possible to document the libtirpc requirement for IPv6 in
  the man page along with a warning about autofs's current behaviour 
  without libtirpc if IPv6 is available ? That might also warn
  distro maintainers.

I think everything else is settled then. Thank you very much for your
patience and help. And my apologies for perhaps wasting some of your
time, I should have checked getaddrinfo's output directly when you mentioned it.

I will try the binaries from your ppa next week to see if anything changes.
If you would like to test other things I will of course try to help with
that, too.

Still, I may be able to save you a bit of time looking up the
interface specifier conventions and requirements for link local
addresses if you still need that information.

getaddrinfo(3) reads "getaddrinfo() supports the address%scope-id notation for
specifying the IPv6 scope-ID". Here scope-ID can mean interface, as in
"%eth0" for link local addresses. So the syntax is quite universal.
Compare RFC 4007 Section 11, on which this is probably based.

Making an interface specifier a required address component when using link local 
addresses would be consistent with the example in nfs(5) 
"This  example  shows  how to mount an NFS server using a raw IPv6 link-
 local address.
 [fe80::215:c5ff:fb3e:e2b1%eth0]:/export /mnt nfs defaults 0 0"
which I would read to imply that %interface must always be supplied with
link local addresses, although it does not say so directly in the text.
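
For comparison, ping6 takes the same notation on the command line, e.g.
with the address from that nfs(5) example:

ping6 fe80::215:c5ff:fb3e:e2b1%eth0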

This requirement would also be conforming to RFC 4007, e.g. Section 11

"
As already mentioned, to specify an IPv6 non-global address without
   ambiguity, an intended scope zone should be specified as well.  As a
   common notation to specify the scope zone, an implementation SHOULD
   support the following format:
            <address>%<zone_id>

...

An implementation could also use interface names as <zone_id> for
   scopes larger than links, but there might be some confusion in this
   use. 
"



Have a nice weekend, thank you very much for all your effort and again my
apologies for checking some things too late !


Best Regards

Christof

-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
--
To unsubscribe from this list: send the line "unsubscribe autofs" in

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-30 11:36                                         ` Christof Koehler
  2016-04-30 15:15                                           ` Christof Koehler
@ 2016-04-30 15:16                                           ` Christof Koehler
  2016-05-02  6:01                                           ` Ian Kent
  2 siblings, 0 replies; 49+ messages in thread
From: Christof Koehler @ 2016-04-30 15:16 UTC (permalink / raw)
  To: Ian Kent; +Cc: autofs

[-- Attachment #1: Type: text/plain, Size: 9265 bytes --]

And here is the log.

On Sat, Apr 30, 2016 at 01:36:00PM +0200, Christof Koehler wrote:
> Hello.
> 
> On Sat, Apr 30, 2016 at 11:21:11AM +0800, Ian Kent wrote:
> > On Fri, 2016-04-29 at 16:10 +0200, Christof Koehler wrote:
> > > Hello,
> > > 
> > > > 
> > > > Would that approach help with what you're trying to achieve?
> > > > 
> > > 
> > > I am not sure of anything right now anymore after noticing what mount 
> > > does. On top, I am not sure if I understand what you are proposing :-)
> > 
> > I'll need to review what we've said already for the problem with
> > mount.nfs(8).
> Just to be clear: I am not refering to the fallback to IPv4, that has
> indeed been covered, see below. I am refering to its stand alone, i.e. called
> by hand on the command line, behaviour to alternate between available IPv6 
> addresses. I did not observe this before and it makes me unsure if my
> initial request for autofs's behaviour might have been unreasonable.
> 
> > > 
> > > Independent of that maybe a) fixing the situation where
> > > autofs/mount falls back to IPv4, which I understand is a bug, and b)
> > 
> > Partly, the situation I described so far is, IPv6 won't work at all if
> > not using libtirpc, that's fixed by building autofs with libtirpc
> > support, no big deal there, we have that already.
> > 
> > I believe we originally started talking about IPv4 fall back because of
> > the lack of libtirpc support in Ubuntu autofs.
> Yes. 
> 
> Eventually the current behaviour could be documented as "working for
> now, not guaranteed to work at all without libtirpc in future releases" 
> as a compromise ?
> 
> It would have been helpful if the man page would have included a hint that
> libtirpc is a prerequisite for IPv6, similar to its mentioning in nfs(5) 
> (search "TI-RPC").
> 
> > There was also uncertainty about how mount.nfs(8) ends with an address.
> > Lets leave that alone for the moment.
> This is clearly not an autofs issue. I will probably ask
> somewhere else if I cannot find an answer myselves.
> 
> mount.nfs4(8) does not mentions IPv6 at all, and nfs(5) does not give
> any information about a possible address selection process.
> 
> > 
> > So I went looking and found that the order of addresses returned by
> > getaddrinfo(3) is in fact a modified rfc 3484 ordering.
> > 
> > I then responded saying this ordering may be sufficient, and a fairly
> > simple way to implement the needed address selection (and I mostly
> > ignored IPv4 with this suggestion).
> My problem is that I did/do not fully understand the details of the
> sorting process you decribed. Now just reading getaddrinfo(3) saying "The  
> sorting  function  used  within getaddrinfo()  is defined  in  RFC 3484" I would
> say that the list as returned by getaddrinfo should be what I wanted (I am not 
> sure what you mean with "modified", the tweaking of gai.conf ?).
> 
> That is why I tried to explain the big picture. For me it is important that it 
> consistently selects the ULA unless specifically instructed to do otherwise.
> 
> > Having done that adding an autofs configuration option to not use IPv4
> > addresses for hosts with multiple addresses should be sufficient to
> > force IPv6 only use. Also not difficult but I Think will have some
> > special cases to consider. I didn't mention this before because I
> > thought the first thing to resolve was address selection.
> For me the actual selection of an existing ULA above IPv4 and GUA is 
> "the big thing". 
> 
> If I can pass the right IPv6 address via an executable map bypassing 
> autofs's own selection mechanims (Why should it use that mechanism if it 
> gets an IP instead of a hostname ? Would it do it ? Have to try.) I guess that 
> is all I really need, as an alternative (or kludge) to changing the selection 
> code. 
> 
> > 
> > As I say, I'll need to re-read what we have said and compare again to
> > the rfc ordering to work out if that is close to what you need but I had
> > the impression it was.
> I am now also thinking it is.
> 
> I will do the tests I promised (IPv6 address in map, with and without the binaries
> form your ppa) in the meantime.
> 
> > The remaining problem is that autofs will remove host addresses from the
> > list if it thinks they are not responding.
> That is a new one to me. You see how ignorant I unfortunately am of what is
> actually happening in the machinery used to mount. 
> 
> > Availability really must be checked because trying to mount from a host
> > that isn't responding can result in lengthy delays which is really bad
> > for interactive use.
> 
> > 
> > This is the main reason for lengthy waits, the kernel RPC can't afford
> > to give up just because an RPC call takes longer than expected for fear
> > of causing file system corruption for NFS mounts.
> I never had to think about this, noticing only the final result which
> simply works (thanks to all people who developed the software involved). 
> If the server is really down it does not work of course, but with current 
> kernels and soft mounts for non-homes that is only a minor problem (as 
> opposed to the lockups with nfs2/3 on 2.x kernels), provided that it is not 
> the users home which is gone :-)
> 
> I use autofs primarily just as an abstraction layer when moving data around, 
> not for failover or similar.  
> 
> > 
> > But having said that, the availability check can be bypassed, if
> > required, by setting single autofs configuration option.
> Which option would that be ? I cannot find it ?
> 
> > 
> > Umm .. am I in close to what we've discussed or have I misunderstood?
> Today I think you are.
> 
> > 
> > > having the possibility to pass IPv6 addresses as a result of an
> > > exectutable map lookup (as is possible with IPv4 adresses) is what I
> > > really 
> > > need.  I assume these two might be easier to do ? If I can pass IPv6
> > > addresses 
> > > from the exectuable map I can shell script what I think I need
> > > myselves. Of
> > > course I still have to check if passing IPv6 is actually not possible
> > > as
> > > I speculated earlier.
> > 
> > Right, but AFAIK (and there might be problems with it) I think that is a
> > mater of address specification.
> > 
> > The brackets vs. without brackets usage for example.
> Even nfs(5) and exports(5) disagree. Pick one and document the
> choice you like ? 
> 
> > There's also the question of whether autofs handles link local addresses
> > properly, I think they can (or are required to) specify an interface as
> > well.
> Due to the fact that a host might present the same fe80:: address
> on multiple or all of its interfaces most of all times an interface
> specifier must be included so that routing selects the right link. Quite some
> tools and programs are broken or confusing from a users perspective handling 
> this [1,2].
> 
> I understand that people will run autofs on link local addresses, though. 
> ping6(8) and nfs(5) suggest "%interface" (with out without square brackets).
> So if the syntax is documented it should be less of a problem.
> 
> That is why I ignored link local in my previous mail, they should be
> left alone (reserved for internal use by the network stack) IMO. You can
> use ULAs if you do not have/want GUAs, that is one of their legitimate use 
> cases IMO ! 
> 
> > 
> > I haven't yet had a chance to study the logs you sent so I may get more
> > information from them. Was there an example of this in those?
> Only demonstrating the nfs(5) square bracket convention calling mount and no
> brackets in other debug output:
> 
> Apr 27 16:27:21 core400 automount[2764]: get_nfs_info: called with host
> core330(2001:638:708:1261:2000::118) proto 6 version 0x40
> ...
> Apr 27 16:28:00 core400 automount[2764]: mount_mount: mount(nfs):
> calling mount -t nfs4 -s -o rw,intr,nosuid,soft,nodev
> [2001:638:708:1261:2000::118]:/locals /local/core330
> ...
> Apr 27 16:28:00 core400 automount[2764]: mount_mount: mount(nfs):
> mounted [2001:638:708:1261:2000::118]:/locals on /local/core330
> > 
> > I'm going to stop here because I think sorting out these two things
> > needs to be done before re-assessing what the remaining situation is.
> > 
> 
> I will do the tests, rethink my approach  and try to understand mounts
> behaviour then. 
> 
> Thank you very much !
> 
> 
> Best Regards
> 
> Christof
> 
> [1]
> https://blogs.gentoo.org/eva/2010/12/17/things-you-didnt-known-about-ipv6-link-local-address/
> https://bugzilla.redhat.com/show_bug.cgi?id=136852
> [2] https://bugzilla.mozilla.org/show_bug.cgi?id=700999
> http://forums.mozillazine.org/viewtopic.php?f=38&t=513822&start=0
> 
> -- 
> Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
> Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
> Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
> 28359 Bremen  
> 
> PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/
> --
> To unsubscribe from this list: send the line "unsubscribe autofs" in

-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/

[-- Attachment #2: pp.gz --]
[-- Type: application/octet-stream, Size: 852 bytes --]

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-04-30 11:36                                         ` Christof Koehler
  2016-04-30 15:15                                           ` Christof Koehler
  2016-04-30 15:16                                           ` Christof Koehler
@ 2016-05-02  6:01                                           ` Ian Kent
  2016-05-02 16:08                                             ` Christof Koehler
  2 siblings, 1 reply; 49+ messages in thread
From: Ian Kent @ 2016-05-02  6:01 UTC (permalink / raw)
  To: christof.koehler; +Cc: autofs

On Sat, 2016-04-30 at 13:36 +0200, Christof Koehler wrote:
> Hello.
> 
> On Sat, Apr 30, 2016 at 11:21:11AM +0800, Ian Kent wrote:
> > On Fri, 2016-04-29 at 16:10 +0200, Christof Koehler wrote:
> > > Hello,
> > > 
> > > > 
> > > > Would that approach help with what you're trying to achieve?
> > > > 
> > > 
> > > I am not sure of anything right now anymore after noticing what
> > > mount 
> > > does. On top, I am not sure if I understand what you are proposing
> > > :-)
> > 
> > I'll need to review what we've said already for the problem with
> > mount.nfs(8).
> Just to be clear: I am not refering to the fallback to IPv4, that has
> indeed been covered, see below. I am refering to its stand alone, i.e.
> called
> by hand on the command line, behaviour to alternate between available
> IPv6 
> addresses. I did not observe this before and it makes me unsure if my
> initial request for autofs's behaviour might have been unreasonable.
> 
> > > 
> > > Independent of that maybe a) fixing the situation where
> > > autofs/mount falls back to IPv4, which I understand is a bug, and
> > > b)
> > 
> > Partly, the situation I described so far is, IPv6 won't work at all
> > if
> > not using libtirpc, that's fixed by building autofs with libtirpc
> > support, no big deal there, we have that already.
> > 
> > I believe we originally started talking about IPv4 fall back because
> > of
> > the lack of libtirpc support in Ubuntu autofs.
> Yes. 
> 
> Eventually the current behaviour could be documented as "working for
> now, not guaranteed to work at all without libtirpc in future
> releases" 
> as a compromise ?

I'll update the man pages to talk about libtirpc.
In particular I'll say that libtirpc is required for IPv6.

I think the address order change I mentioned is needed regardless
because the implementation I have is insufficient.

I didn't have time and didn't go looking for more information when I
implemented it (which was quite a while ago now), assuming I would
update it when it started being used based on discussion and
investigation at that time (and it seems it is that time now).

> 
> It would have been helpful if the man page would have included a hint
> that
> libtirpc is a prerequisite for IPv6, similar to its mentioning in
> nfs(5) 
> (search "TI-RPC").

Yes, that detail was lost in the effort of implementation, sadly many
things can get missed due to this.

But IPv6 wasn't the only reason for changing to libtirpc.
The glibc RPC code was to be deprecated and (I believe) the glibc
maintainers didn't want to continue to maintain it.

> 
> > There was also uncertainty about how mount.nfs(8) ends with an
> > address.
> > Lets leave that alone for the moment.
> This is clearly not an autofs issue. I will probably ask
> somewhere else if I cannot find an answer myselves.
> 
> mount.nfs4(8) does not mentions IPv6 at all, and nfs(5) does not give
> any information about a possible address selection process.

And, after having a quick look I see that it doesn't do anything special
wrt. address selection.

I only looked at code concerned with option strings (read: option
strings that are passed directly to the kernel at mount, the most common
method used by mount.nfs(8), even for not so new kernels nowadays) and
it just uses the first address in the list returned by getaddrinfo(3).

I checked nfs-utils versions 1.3.3 and 1.2.6, they were essentially the
same in this respect.

What I didn't see is it trying different addresses on retry after failure,
but mount.nfs(8) has never claimed to do that.

> 
> > 
> > So I went looking and found that the order of addresses returned by
> > getaddrinfo(3) is in fact a modified rfc 3484 ordering.
> > 
> > I then responded saying this ordering may be sufficient, and a
> > fairly
> > simple way to implement the needed address selection (and I mostly
> > ignored IPv4 with this suggestion).
> My problem is that I did/do not fully understand the details of the
> sorting process you decribed. Now just reading getaddrinfo(3) saying
> "The  
> sorting  function  used  within getaddrinfo()  is defined  in  RFC
> 3484" I would
> say that the list as returned by getaddrinfo should be what I wanted
> (I am not 
> sure what you mean with "modified", the tweaking of gai.conf ?).

I arrived at this by looking again at the man page, going to the glibc
source for getaddrinfo(3), and checking the results of a Google search.

The glibc source clearly uses a function with rfc 3484 as part of its
name so I assume it tries to implement that selection sorting procedure.

I also saw a few mailing list posts that implied the selection had been
modified over time and there was mention of RFC 6724 in some posts.

So that led me to think that minor changes had been made due to real
world situations and that, since RFC 6724 had come up and the glibc
folks are standards aware, some changes had been made based on that
too.

Working out just what is actually implemented is a somewhat larger task
that I'd rather not pursue ;)
 
> 
> That is why I tried to explain the big picture. For me it is important
> that it 
> consistently selects the ULA unless specifically instructed to do
> otherwise.
> 
> > Having done that, adding an autofs configuration option to not use
> > IPv4 addresses for hosts with multiple addresses should be sufficient
> > to force IPv6-only use. Also not difficult, but I think it will have
> > some special cases to consider. I didn't mention this before because I
> > thought the first thing to resolve was address selection.
> For me the actual selection of an existing ULA above IPv4 and GUA is
> "the big thing".
> 
> If I can pass the right IPv6 address via an executable map, bypassing
> autofs's own selection mechanisms (Why should it use that mechanism if
> it gets an IP instead of a hostname ? Would it do it ? Have to try.),
> I guess that is all I really need, as an alternative (or kludge) to
> changing the selection code.

I can help with that if there are any problems.

I don't think I require square brackets on IPv6 addresses but they
shouldn't cause the mount to fail either, if they are present.

It's a little hard to work out actually because the parsing code for sun
format mount maps (but not the master map) is in several different
locations (a background task I have is to change this to a YACC parser
and do almost all the parsing in one place).

I do add them for utilities that need them if they aren't present, such
as mount.

That might need to change for the sake of consistency, not sure.

If an IP address is used then the mount "host" doesn't have multiple
addresses as far as autofs is concerned but autofs will still check the
host is responding to avoid lengthy delays (but see below).
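
For instance (a hedged example, reusing the share, options and address
style that appear elsewhere in this thread), a map entry carrying a single
literal address leaves autofs with only that one "host" to deal with:

    core330  -fstype=nfs4,rw,intr,nosuid,soft,nodev  [2001:638:708:1261:2000::118]:/locals
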

> 
> > 
> > As I say, I'll need to re-read what we have said and compare again
> > to
> > the rfc ordering to work out if that is close to what you need but I
> > had
> > the impression it was.
> I am now also thinking it is.
> 
> I will do the tests I promised (IPv6 address in map, with and without
> the binaries from your ppa) in the meantime.
> 
> > The remaining problem is that autofs will remove host addresses from
> > the
> > list if it thinks they are not responding.
> That is a new one to me. You see how ignorant I unfortunately am of
> what is
> actually happening in the machinery used to mount. 

LOL, right, but that's a direct consequence of the lengthy delays I've
spoken about.

The list is constructed on the fly at mount time and the entries that
don't respond are removed, leaving only hosts (or addresses) for which a
mount will be attempted.

That's why you sometimes see the "no hosts available" message, which has
been a problem at times as it tends to appear when things aren't working
as needed or expected.

There's no monitor thread that updates previously seen hosts'
availability (as there is in amd, but that application will probably
never support IPv6).

> > Availability really must be checked because trying to mount from a
> > host
> > that isn't responding can result in lengthy delays which is really
> > bad
> > for interactive use.
> 
> > 
> > This is the main reason for lengthy waits, the kernel RPC can't
> > afford
> > to give up just because an RPC call takes longer than expected for
> > fear
> > of causing file system corruption for NFS mounts.
> I never had to think about this, noticing only the final result which
> simply works (thanks to all people who developed the software
> involved). 
> If the server is really down it does not work of course, but with
> current 
> kernels and soft mounts for non-homes that is only a minor problem (as
> opposed to the lockups with nfs2/3 on 2.x kernels), provided that it
> is not the user's home which is gone :-)
> 
> I use autofs primarily just as an abstraction layer when moving data
> around, 
> not for failover or similar.  

Right.

> > 
> > But having said that, the availability check can be bypassed, if
> > required, by setting single autofs configuration option.
> Which option would that be ? I cannot find it ?

I was going to say, if you specify a sensible value for mount_wait the
probing won't be done, but I see in the code that's not quite true. If
we have more than one non-local address in the list the probe may still
get done.

But if a value other than the default of -1 is set for mount_wait and
there is only one entry in the list of hosts (such as when using an
address), the probe will not be done. Otherwise the probe is done.

As can be seen here:
        /*
         * Check for either a list containing only proximity local hosts
         * or a single host entry whose proximity isn't local. If so
         * return immediately as we don't want to add probe latency for
         * the common case of a single filesystem mount request.
         *
         * But, if the kernel understands text nfs mount options then
         * mount.nfs most likely bypasses its probing and lets the kernel
         * do all the work. This can lead to long timeouts for hosts that
         * are not available so check the kernel version and mount.nfs
         * version and probe singleton mounts if the kernel version is
         * greater than 2.6.22 and mount.nfs version is greater than 1.1.1.
         * But also allow the MOUNT_WAIT configuration parameter to override
         * the probing.
         */
        if (nfs_mount_uses_string_options &&
            defaults_get_mount_wait() == -1 &&
           (kern_vers = linux_version_code()) > KERNEL_VERSION(2, 6, 22)) {
                if (!this)
                        return 1;
        } else {
                if (!this || !this->next)
                        return 1;
        }

But since a patch in the Debian autofs package changes the kernel
version check code (removes it), I will need to check that the maintainer
patch doesn't interfere with this case.
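
For reference, setting it looks roughly like this (a hedged sketch; the
value 10 is arbitrary and the file name depends on the packaging):

    # autofs 5.1.x: /etc/autofs.conf; older Debian/Ubuntu packages use
    # MOUNT_WAIT=10 in /etc/default/autofs instead
    [ autofs ]
    # wait at most 10 seconds for mount(8); any value other than the
    # default of -1 also suppresses the availability probe for mounts
    # with a single host (or literal address), as described above
    mount_wait = 10
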

> 
> > 
> > Umm .. am I in close to what we've discussed or have I
> > misunderstood?
> Today I think you are.
> 
> > 
> > > having the possibility to pass IPv6 addresses as a result of an
> > > exectutable map lookup (as is possible with IPv4 adresses) is what
> > > I
> > > really 
> > > need.  I assume these two might be easier to do ? If I can pass
> > > IPv6
> > > addresses 
> > > from the exectuable map I can shell script what I think I need
> > > myselves. Of
> > > course I still have to check if passing IPv6 is actually not
> > > possible
> > > as
> > > I speculated earlier.
> > 
> > Right, but AFAIK (and there might be problems with it) I think that
> > is a
> > mater of address specification.
> > 
> > The brackets vs. without brackets usage for example.
> Even nfs(5) and exports(5) disagree. Pick one and document the
> choice you like ? 
> 
> > There's also the question of whether autofs handles link local
> > addresses
> > properly, I think they can (or are required to) specify an interface
> > as
> > well.
> Because a host might present the same fe80:: address on multiple or
> all of its interfaces, an interface specifier must almost always be
> included so that routing selects the right link. Quite a few tools
> and programs are broken or confusing from a user's perspective when
> handling this [1,2].
> 
> I understand that people will run autofs on link local addresses,
> though. 
> ping6(8) and nfs(5) suggest "%interface" (with or without square
> brackets).
> So if the syntax is documented it should be less of a problem.
> 
> That is why I ignored link local in my previous mail, they should be
> left alone (reserved for internal use by the network stack) IMO. You
> can
> use ULAs if you do not have/want GUAs, that is one of their legitimate
> use 
> cases IMO ! 
> 
> > 
> > I haven't yet had a chance to study the logs you sent so I may get
> > more
> > information from them. Was there an example of this in those?
> Only one demonstrating the nfs(5) square bracket convention when
> calling mount, and no brackets in other debug output:
> 
> Apr 27 16:27:21 core400 automount[2764]: get_nfs_info: called with
> host
> core330(2001:638:708:1261:2000::118) proto 6 version 0x40
> ...
> Apr 27 16:28:00 core400 automount[2764]: mount_mount: mount(nfs):
> calling mount -t nfs4 -s -o rw,intr,nosuid,soft,nodev
> [2001:638:708:1261:2000::118]:/locals /local/core330
> ...
> Apr 27 16:28:00 core400 automount[2764]: mount_mount: mount(nfs):
> mounted [2001:638:708:1261:2000::118]:/locals on /local/core330
> > 
> > I'm going to stop here because I think sorting out these two things
> > needs to be done before re-assessing what the remaining situation
> > is.
> > 
> 
> I will do the tests, rethink my approach and try to understand mount's
> behaviour then.
> 
> Thank you very much !
> 
> 
> Best Regards
> 
> Christof
> 
> [1]
> https://blogs.gentoo.org/eva/2010/12/17/things-you-didnt-known-about-i
> pv6-link-local-address/
> https://bugzilla.redhat.com/show_bug.cgi?id=136852
> [2] https://bugzilla.mozilla.org/show_bug.cgi?id=700999
> http://forums.mozillazine.org/viewtopic.php?f=38&t=513822&start=0
> 

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-05-02  6:01                                           ` Ian Kent
@ 2016-05-02 16:08                                             ` Christof Koehler
  2016-05-03  7:58                                               ` Ian Kent
  0 siblings, 1 reply; 49+ messages in thread
From: Christof Koehler @ 2016-05-02 16:08 UTC (permalink / raw)
  To: Ian Kent; +Cc: autofs

[-- Attachment #1: Type: text/plain, Size: 5740 bytes --]

Hello,

I did three tests now, each with my self-built packages and the ones
from your ppa:

1. multiple IPv6 addresses
2. square bracketed IPv6 address from map
3. new situation, "failover": the server has several IPv6 and one IPv4
   address, but the server exports only to one IPv6 address

There does not appear to be any difference depending on the package versions
used. Log output for each using the packages from your ppa is attached.

In test 1 one address is selected based on response time, in test 2
there is a successful mount using the specified address, and in the new
test 3 autofs starts with the address with the lowest response time but,
once that fails, proceeds to try the others. No user-noticeable
delays.

Especially this failover behaviour is very nice. Effectively it turns
steering the address selection towards what I want from a client side
problem into a server side one ! I can simply export just to ULAs and
autofs figures this out for me. Is that correct ? If so, this is all I
ever wanted: I can simply put a readable hostname in the map and use the
server's /etc/exports to fix the address. Sorry for not making the
connection earlier.
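
For example, a hedged /etc/exports sketch (reusing the ULA prefix from the
logs; the export options are placeholders only) that steers a multi-homed
client towards the ULA:

    # export only to the ULA prefix; a client trying the GUA or IPv4
    # address gets "access denied" and autofs falls back to the ULA
    /locals  fd5f:852:a27c:1261::/64(rw,no_subtree_check)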

On Mon, May 02, 2016 at 02:01:42PM +0800, Ian Kent wrote:
> On Sat, 2016-04-30 at 13:36 +0200, Christof Koehler wrote:
> 
> I'll update the man pages to talk about libtirpc.
> In particular I'll say that libtirpc is required for IPv6.
Thank you. Perhaps this will also alert distro maintainers that their
build should change, giving autofs the same libtirpc requirement as
nfs-common. Maybe it makes it into Debian testing in time for
Ubuntu 18.04 LTS :-)

But for now rebuilding appears to work anyway. I should update the bug
report in the Ubuntu tracker.

> What I didn't see is trying different addresses over retry on failure
> but mount.nfs(8) has never claimed to do that.

And on top of that, the failover feature in autofs does what I wanted in
the first place. I just was not consciously aware of it.

> I also saw a few mailing list posts that implied the selection had been
> modified over time and there was mention of rfc 6724 in some posts.
I believe the difference is primarily that some address classes were
deprecated in the real world in the meantime, see
https://tools.ietf.org/html/rfc6724#appendix-B

> 
> I can help with that if there are any problems.

Apparently it now works with tests 2 and 3 above. So no changes are
needed if 2 and 3 are the desired behaviour from your side ! Thank you
for your patience.

> 
> I don't think I require square brackets on IPv6 addresses but they
> shouldn't cause the mount to fail either, if they are present.
Well, apparently the string from the map is passed verbatim to mount.
Without brackets mount is unhappy, if I remember my own testing
correctly; nfs(5) confirms this and says mount needs them.

autofs and mount are both client-side programs, so requiring square
brackets appears to be a reasonable convention. /etc/exports is server
side and does not allow square brackets. I could live with that.

> 
> It's a little hard to work out actually because the parsing code for sun
> format mount maps (but not the master map) is in several different
> locations (a background task I have is to change this to a YACC parser
> and do almost all the parsing in one place).
That I do not know. I see the problem.

But maybe related to this:
We are still using NIS, and we actually had the autofs maps in NIS
for years.
If you want I can try what happens if I put an IPv6 address into one of
our NIS maps (reactivate one of the autofs maps). The server is a Debian
oldstable.

I removed the autofs maps from NIS because I am not sure whether NIS is
supported over IPv6, so it might not be a viable option if we have
IPv6-only machines as mentioned earlier. I am looking at openldap/kerberos
every few months, trying to convince myself that moving to it is worth
the effort compared to keeping everything in Ansible-managed local files.

There is http://www.linux-nis.org/nis-ipv6/, but a
ldd /usr/sbin/ypbind|grep rpc comes up empty.
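
(The same kind of quick check can be used on the automount binary itself
to confirm a rebuild actually picked up libtirpc; the path is assumed from
the Debian/Ubuntu packaging:)

    ldd /usr/sbin/automount | grep tirpc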

> The list is constructed on the fly at mount time and those that don't
> respond are removed leaving only hosts (or addresses) that need a mount
> to be attempted.
That is test 3 above, right ? Appears to work fine. Thank you for
bringing this up, I got the idea then.

> 
> availability (as there is in amd, but that application will probably
> never support IPv6).
Reminds me:
We used am-utils back around 1998, because we had DEC OSF/1 aka
Tru64 on Alpha 21164 based workstations. When we moved to autofs (Linux
only then) about 9 years ago we never looked back. But great CPUs for
number crunching.

> 
> > Which option would that be ? I cannot find it ?
> 
> I was going to say, if you specify a sensible value for mount_wait the
Ah, OK !



With the above said: I tried to find out where my bad assumption
regarding the ordering of the list returned by getaddrinfo came from. I
currently believe it was like this.

First I prevented the use of privacy addresses by adding a label to the
source selection list. Doing that I noted that there already was a label
for ULAs. That was the red herring, because after that I completely
ignored the destination address selection step, which is the important
step here and for which there is no corresponding preference value in
/etc/gai.conf. If only source selection were involved my assumption
would have been right ... I believe :-)
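
For anyone following along, a hedged sketch of the kind of /etc/gai.conf
label tweak meant above (the numeric values are illustrative only; note
that specifying any label line replaces the whole built-in table, so all
default entries have to be repeated, see gai.conf(5)):

    label ::1/128        0
    label ::/0           1
    label 2002::/16      2
    label ::/96          3
    label ::ffff:0:0/96  4
    # a separate label for the ULA range
    label fc00::/7       5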

Best Regards

Christof

-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/

[-- Attachment #2: log1.txt --]
[-- Type: text/plain, Size: 27486 bytes --]

May  2 16:57:35 core400 autofs[10603]:    ...done.
May  2 16:57:35 core400 systemd[1]: Started LSB: Automounts filesystems on demand.
May  2 16:57:42 core400 automount[10615]: st_expire: state 1 path /local
May  2 16:57:42 core400 automount[10615]: expire_proc: exp_proc = 139681865455360 path /local
May  2 16:57:42 core400 automount[10615]: expire_cleanup: got thid 139681865455360 path /local stat 0
May  2 16:57:42 core400 automount[10615]: expire_cleanup: sigchld: exp 139681865455360 finished, switching from 2 to 1
May  2 16:57:42 core400 automount[10615]: st_ready: st_ready(): state = 2 path /local
May  2 16:57:45 core400 automount[10615]: handle_packet: type = 3
May  2 16:57:45 core400 automount[10615]: handle_packet_missing_indirect: token 5, name core330, request pid 10632
May  2 16:57:45 core400 automount[10615]: attempting to mount entry /local/core330
May  2 16:57:45 core400 automount[10615]: lookup_mount: lookup(program): looking up core330
May  2 16:57:45 core400 automount[10615]: lookup_mount: lookup(program): core330 -> -fstype=nfs4,rw,intr,nosuid,soft,nodev core330:/locals
May  2 16:57:45 core400 automount[10615]: parse_mount: parse(sun): expanded entry: -fstype=nfs4,rw,intr,nosuid,soft,nodev core330:/locals
May  2 16:57:45 core400 automount[10615]: parse_mount: parse(sun): gathered options: fstype=nfs4,rw,intr,nosuid,soft,nodev
May  2 16:57:45 core400 automount[10615]: parse_mount: parse(sun): dequote("core330:/locals") -> core330:/locals
May  2 16:57:45 core400 automount[10615]: parse_mount: parse(sun): core of entry: options=fstype=nfs4,rw,intr,nosuid,soft,nodev, loc=core330:/locals
May  2 16:57:45 core400 automount[10615]: sun_mount: parse(sun): mounting root /local, mountpoint core330, what core330:/locals, fstype nfs4, options rw,intr,nosuid,soft,nodev
May  2 16:57:45 core400 automount[10615]: mount_mount: mount(nfs): root=/local name=core330 what=core330:/locals, fstype=nfs4, options=rw,intr,nosuid,soft,nodev
May  2 16:57:45 core400 automount[10615]: mount_mount: mount(nfs): nfs options="rw,intr,nosuid,soft,nodev", nobind=0, nosymlink=0, ro=0
May  2 16:57:45 core400 automount[10615]: get_nfs_info: called with host core330(192.168.220.118) proto 6 version 0x40
May  2 16:57:45 core400 automount[10615]: get_nfs_info: nfs v4 rpc ping time: 0.000113
May  2 16:57:45 core400 automount[10615]: get_nfs_info: host core330 cost 113 weight 0
May  2 16:57:45 core400 automount[10615]: get_nfs_info: called with host core330(fd5f:852:a27c:1261:2000::118) proto 6 version 0x40
May  2 16:57:45 core400 automount[10615]: get_nfs_info: nfs v4 rpc ping time: 0.000140
May  2 16:57:45 core400 automount[10615]: get_nfs_info: host core330 cost 139 weight 0
May  2 16:57:45 core400 automount[10615]: get_nfs_info: called with host core330(2001:638:708:1261:2000::118) proto 6 version 0x40
May  2 16:57:45 core400 automount[10615]: get_nfs_info: nfs v4 rpc ping time: 0.000186
May  2 16:57:45 core400 automount[10615]: get_nfs_info: host core330 cost 185 weight 0
May  2 16:57:45 core400 automount[10615]: prune_host_list: selected subset of hosts that support NFS4 over TCP
May  2 16:57:45 core400 automount[10615]: mount_mount: mount(nfs): calling mkdir_path /local/core330
May  2 16:57:45 core400 automount[10615]: mount_mount: mount(nfs): calling mount -t nfs4 -s -o rw,intr,nosuid,soft,nodev 192.168.220.118:/locals /local/core330
May  2 16:57:45 core400 automount[10615]: mount_mount: mount(nfs): mounted 192.168.220.118:/locals on /local/core330
May  2 16:57:45 core400 automount[10615]: dev_ioctl_send_ready: token = 5
May  2 16:57:45 core400 automount[10615]: mounted /local/core330
May  2 16:57:46 core400 automount[10615]: st_expire: state 1 path /local
May  2 16:57:46 core400 automount[10615]: expire_proc: exp_proc = 139681865455360 path /local
May  2 16:57:46 core400 automount[10615]: expire_proc_indirect: expire /local/core330
May  2 16:57:46 core400 automount[10615]: 1 remaining in /local
May  2 16:57:46 core400 automount[10615]: expire_cleanup: got thid 139681865455360 path /local stat 3
May  2 16:57:46 core400 automount[10615]: expire_cleanup: sigchld: exp 139681865455360 finished, switching from 2 to 1
May  2 16:57:46 core400 automount[10615]: st_ready: st_ready(): state = 2 path /local
May  2 16:57:50 core400 automount[10615]: st_expire: state 1 path /local
May  2 16:57:50 core400 automount[10615]: expire_proc: exp_proc = 139681865455360 path /local
May  2 16:57:50 core400 automount[10615]: expire_proc_indirect: expire /local/core330
May  2 16:57:50 core400 automount[10615]: 1 remaining in /local
May  2 16:57:50 core400 automount[10615]: expire_cleanup: got thid 139681865455360 path /local stat 3
May  2 16:57:50 core400 automount[10615]: expire_cleanup: sigchld: exp 139681865455360 finished, switching from 2 to 1
May  2 16:57:50 core400 automount[10615]: st_ready: st_ready(): state = 2 path /local
May  2 16:57:54 core400 automount[10615]: st_expire: state 1 path /local
May  2 16:57:54 core400 automount[10615]: expire_proc: exp_proc = 139681865455360 path /local
May  2 16:57:54 core400 automount[10615]: expire_proc_indirect: expire /local/core330
May  2 16:57:54 core400 automount[10615]: 1 remaining in /local
May  2 16:57:54 core400 automount[10615]: expire_cleanup: got thid 139681865455360 path /local stat 3
May  2 16:57:54 core400 automount[10615]: expire_cleanup: sigchld: exp 139681865455360 finished, switching from 2 to 1
May  2 16:57:54 core400 automount[10615]: st_ready: st_ready(): state = 2 path /local
May  2 16:57:58 core400 automount[10615]: st_expire: state 1 path /local
May  2 16:57:58 core400 automount[10615]: expire_proc: exp_proc = 139681865455360 path /local
May  2 16:57:58 core400 automount[10615]: expire_proc_indirect: expire /local/core330
May  2 16:57:58 core400 automount[10615]: 1 remaining in /local
May  2 16:57:58 core400 automount[10615]: expire_cleanup: got thid 139681865455360 path /local stat 3
May  2 16:57:58 core400 automount[10615]: expire_cleanup: sigchld: exp 139681865455360 finished, switching from 2 to 1
May  2 16:57:58 core400 automount[10615]: st_ready: st_ready(): state = 2 path /local
May  2 16:58:02 core400 automount[10615]: st_expire: state 1 path /local
May  2 16:58:02 core400 automount[10615]: expire_proc: exp_proc = 139681865455360 path /local
May  2 16:58:02 core400 automount[10615]: expire_proc_indirect: expire /local/core330
May  2 16:58:02 core400 automount[10615]: handle_packet: type = 4
May  2 16:58:02 core400 automount[10615]: handle_packet_expire_indirect: token 6, name core330
May  2 16:58:02 core400 automount[10615]: expiring path /local/core330
May  2 16:58:02 core400 automount[10615]: umount_multi: path /local/core330 incl 1
May  2 16:58:02 core400 automount[10615]: umount_subtree_mounts: unmounting dir = /local/core330
May  2 16:58:02 core400 automount[10615]: rm_unwanted_fn: removing directory /local/core330
May  2 16:58:02 core400 automount[10615]: expired /local/core330
May  2 16:58:02 core400 automount[10615]: dev_ioctl_send_ready: token = 6
May  2 16:58:02 core400 automount[10615]: expire_cleanup: got thid 139681865455360 path /local stat 0
May  2 16:58:02 core400 automount[10615]: expire_cleanup: sigchld: exp 139681865455360 finished, switching from 2 to 1
May  2 16:58:02 core400 automount[10615]: st_ready: st_ready(): state = 2 path /local
May  2 16:58:05 core400 automount[10615]: handle_packet: type = 3
May  2 16:58:05 core400 automount[10615]: handle_packet_missing_indirect: token 7, name core330, request pid 10661
May  2 16:58:05 core400 automount[10615]: attempting to mount entry /local/core330
May  2 16:58:05 core400 automount[10615]: lookup_mount: lookup(program): core330 -> -fstype=nfs4,rw,intr,nosuid,soft,nodev core330:/locals
May  2 16:58:05 core400 automount[10615]: parse_mount: parse(sun): expanded entry: -fstype=nfs4,rw,intr,nosuid,soft,nodev core330:/locals
May  2 16:58:05 core400 automount[10615]: parse_mount: parse(sun): gathered options: fstype=nfs4,rw,intr,nosuid,soft,nodev
May  2 16:58:05 core400 automount[10615]: parse_mount: parse(sun): dequote("core330:/locals") -> core330:/locals
May  2 16:58:05 core400 automount[10615]: parse_mount: parse(sun): core of entry: options=fstype=nfs4,rw,intr,nosuid,soft,nodev, loc=core330:/locals
May  2 16:58:05 core400 automount[10615]: sun_mount: parse(sun): mounting root /local, mountpoint core330, what core330:/locals, fstype nfs4, options rw,intr,nosuid,soft,nodev
May  2 16:58:05 core400 automount[10615]: mount_mount: mount(nfs): root=/local name=core330 what=core330:/locals, fstype=nfs4, options=rw,intr,nosuid,soft,nodev
May  2 16:58:05 core400 automount[10615]: mount_mount: mount(nfs): nfs options="rw,intr,nosuid,soft,nodev", nobind=0, nosymlink=0, ro=0
May  2 16:58:05 core400 automount[10615]: get_nfs_info: called with host core330(192.168.220.118) proto 6 version 0x40
May  2 16:58:05 core400 automount[10615]: get_nfs_info: nfs v4 rpc ping time: 0.000246
May  2 16:58:05 core400 automount[10615]: get_nfs_info: host core330 cost 246 weight 0
May  2 16:58:05 core400 automount[10615]: get_nfs_info: called with host core330(2001:638:708:1261:2000::118) proto 6 version 0x40
May  2 16:58:05 core400 automount[10615]: get_nfs_info: nfs v4 rpc ping time: 0.000288
May  2 16:58:05 core400 automount[10615]: get_nfs_info: host core330 cost 288 weight 0
May  2 16:58:05 core400 automount[10615]: get_nfs_info: called with host core330(fd5f:852:a27c:1261:2000::118) proto 6 version 0x40
May  2 16:58:05 core400 automount[10615]: get_nfs_info: nfs v4 rpc ping time: 0.000280
May  2 16:58:05 core400 automount[10615]: get_nfs_info: host core330 cost 279 weight 0
May  2 16:58:05 core400 automount[10615]: prune_host_list: selected subset of hosts that support NFS4 over TCP
May  2 16:58:05 core400 automount[10615]: mount_mount: mount(nfs): calling mkdir_path /local/core330
May  2 16:58:05 core400 automount[10615]: mount_mount: mount(nfs): calling mount -t nfs4 -s -o rw,intr,nosuid,soft,nodev 192.168.220.118:/locals /local/core330
May  2 16:58:05 core400 automount[10615]: mount_mount: mount(nfs): mounted 192.168.220.118:/locals on /local/core330
May  2 16:58:05 core400 automount[10615]: dev_ioctl_send_ready: token = 7
May  2 16:58:05 core400 automount[10615]: mounted /local/core330
May  2 16:58:06 core400 automount[10615]: st_expire: state 1 path /local
May  2 16:58:06 core400 automount[10615]: expire_proc: exp_proc = 139681865455360 path /local
May  2 16:58:06 core400 automount[10615]: expire_proc_indirect: expire /local/core330
May  2 16:58:06 core400 automount[10615]: 1 remaining in /local
May  2 16:58:06 core400 automount[10615]: expire_cleanup: got thid 139681865455360 path /local stat 3
May  2 16:58:06 core400 automount[10615]: expire_cleanup: sigchld: exp 139681865455360 finished, switching from 2 to 1
May  2 16:58:06 core400 automount[10615]: st_ready: st_ready(): state = 2 path /local
May  2 16:58:10 core400 automount[10615]: st_expire: state 1 path /local
May  2 16:58:10 core400 automount[10615]: expire_proc: exp_proc = 139681865455360 path /local
May  2 16:58:10 core400 automount[10615]: expire_proc_indirect: expire /local/core330
May  2 16:58:10 core400 automount[10615]: 1 remaining in /local
May  2 16:58:10 core400 automount[10615]: expire_cleanup: got thid 139681865455360 path /local stat 3
May  2 16:58:10 core400 automount[10615]: expire_cleanup: sigchld: exp 139681865455360 finished, switching from 2 to 1
May  2 16:58:10 core400 automount[10615]: st_ready: st_ready(): state = 2 path /local
May  2 16:58:14 core400 automount[10615]: st_expire: state 1 path /local
May  2 16:58:14 core400 automount[10615]: expire_proc: exp_proc = 139681865455360 path /local
May  2 16:58:14 core400 automount[10615]: expire_proc_indirect: expire /local/core330
May  2 16:58:14 core400 automount[10615]: 1 remaining in /local
May  2 16:58:14 core400 automount[10615]: expire_cleanup: got thid 139681865455360 path /local stat 3
May  2 16:58:14 core400 automount[10615]: expire_cleanup: sigchld: exp 139681865455360 finished, switching from 2 to 1
May  2 16:58:14 core400 automount[10615]: st_ready: st_ready(): state = 2 path /local
May  2 16:58:18 core400 automount[10615]: st_expire: state 1 path /local
May  2 16:58:18 core400 automount[10615]: expire_proc: exp_proc = 139681865455360 path /local
May  2 16:58:18 core400 automount[10615]: expire_proc_indirect: expire /local/core330
May  2 16:58:18 core400 automount[10615]: 1 remaining in /local
May  2 16:58:18 core400 automount[10615]: expire_cleanup: got thid 139681865455360 path /local stat 3
May  2 16:58:18 core400 automount[10615]: expire_cleanup: sigchld: exp 139681865455360 finished, switching from 2 to 1
May  2 16:58:18 core400 automount[10615]: st_ready: st_ready(): state = 2 path /local
May  2 16:58:22 core400 automount[10615]: st_expire: state 1 path /local
May  2 16:58:22 core400 automount[10615]: expire_proc: exp_proc = 139681865455360 path /local
May  2 16:58:22 core400 automount[10615]: expire_proc_indirect: expire /local/core330
May  2 16:58:22 core400 automount[10615]: handle_packet: type = 4
May  2 16:58:22 core400 automount[10615]: handle_packet_expire_indirect: token 8, name core330
May  2 16:58:22 core400 automount[10615]: expiring path /local/core330
May  2 16:58:22 core400 automount[10615]: umount_multi: path /local/core330 incl 1
May  2 16:58:22 core400 automount[10615]: umount_subtree_mounts: unmounting dir = /local/core330
May  2 16:58:22 core400 automount[10615]: rm_unwanted_fn: removing directory /local/core330
May  2 16:58:22 core400 automount[10615]: expired /local/core330
May  2 16:58:22 core400 automount[10615]: dev_ioctl_send_ready: token = 8
May  2 16:58:22 core400 automount[10615]: expire_cleanup: got thid 139681865455360 path /local stat 0
May  2 16:58:22 core400 automount[10615]: expire_cleanup: sigchld: exp 139681865455360 finished, switching from 2 to 1
May  2 16:58:22 core400 automount[10615]: st_ready: st_ready(): state = 2 path /local
May  2 16:58:24 core400 automount[10615]: handle_packet: type = 3
May  2 16:58:24 core400 automount[10615]: handle_packet_missing_indirect: token 9, name core330, request pid 10688
May  2 16:58:24 core400 automount[10615]: attempting to mount entry /local/core330
May  2 16:58:24 core400 automount[10615]: lookup_mount: lookup(program): core330 -> -fstype=nfs4,rw,intr,nosuid,soft,nodev core330:/locals
May  2 16:58:24 core400 automount[10615]: parse_mount: parse(sun): expanded entry: -fstype=nfs4,rw,intr,nosuid,soft,nodev core330:/locals
May  2 16:58:24 core400 automount[10615]: parse_mount: parse(sun): gathered options: fstype=nfs4,rw,intr,nosuid,soft,nodev
May  2 16:58:24 core400 automount[10615]: parse_mount: parse(sun): dequote("core330:/locals") -> core330:/locals
May  2 16:58:24 core400 automount[10615]: parse_mount: parse(sun): core of entry: options=fstype=nfs4,rw,intr,nosuid,soft,nodev, loc=core330:/locals
May  2 16:58:24 core400 automount[10615]: sun_mount: parse(sun): mounting root /local, mountpoint core330, what core330:/locals, fstype nfs4, options rw,intr,nosuid,soft,nodev
May  2 16:58:24 core400 automount[10615]: mount_mount: mount(nfs): root=/local name=core330 what=core330:/locals, fstype=nfs4, options=rw,intr,nosuid,soft,nodev
May  2 16:58:24 core400 automount[10615]: mount_mount: mount(nfs): nfs options="rw,intr,nosuid,soft,nodev", nobind=0, nosymlink=0, ro=0
May  2 16:58:24 core400 automount[10615]: get_nfs_info: called with host core330(192.168.220.118) proto 6 version 0x40
May  2 16:58:24 core400 automount[10615]: get_nfs_info: nfs v4 rpc ping time: 0.001811
May  2 16:58:24 core400 automount[10615]: get_nfs_info: host core330 cost 1811 weight 0
May  2 16:58:24 core400 automount[10615]: get_nfs_info: called with host core330(fd5f:852:a27c:1261:2000::118) proto 6 version 0x40
May  2 16:58:24 core400 automount[10615]: get_nfs_info: nfs v4 rpc ping time: 0.000609
May  2 16:58:24 core400 automount[10615]: get_nfs_info: host core330 cost 608 weight 0
May  2 16:58:24 core400 automount[10615]: get_nfs_info: called with host core330(2001:638:708:1261:2000::118) proto 6 version 0x40
May  2 16:58:24 core400 automount[10615]: get_nfs_info: nfs v4 rpc ping time: 0.000131
May  2 16:58:24 core400 automount[10615]: get_nfs_info: host core330 cost 131 weight 0
May  2 16:58:24 core400 automount[10615]: prune_host_list: selected subset of hosts that support NFS4 over TCP
May  2 16:58:24 core400 automount[10615]: mount_mount: mount(nfs): calling mkdir_path /local/core330
May  2 16:58:24 core400 automount[10615]: mount_mount: mount(nfs): calling mount -t nfs4 -s -o rw,intr,nosuid,soft,nodev [2001:638:708:1261:2000::118]:/locals /local/core330
May  2 16:58:24 core400 automount[10615]: mount_mount: mount(nfs): mounted [2001:638:708:1261:2000::118]:/locals on /local/core330
May  2 16:58:24 core400 automount[10615]: dev_ioctl_send_ready: token = 9
May  2 16:58:24 core400 automount[10615]: mounted /local/core330
May  2 16:58:26 core400 automount[10615]: st_expire: state 1 path /local
May  2 16:58:26 core400 automount[10615]: expire_proc: exp_proc = 139681865455360 path /local
May  2 16:58:26 core400 automount[10615]: expire_proc_indirect: expire /local/core330
May  2 16:58:26 core400 automount[10615]: 1 remaining in /local
May  2 16:58:26 core400 automount[10615]: expire_cleanup: got thid 139681865455360 path /local stat 3
May  2 16:58:26 core400 automount[10615]: expire_cleanup: sigchld: exp 139681865455360 finished, switching from 2 to 1
May  2 16:58:26 core400 automount[10615]: st_ready: st_ready(): state = 2 path /local
May  2 16:58:30 core400 automount[10615]: st_expire: state 1 path /local
May  2 16:58:30 core400 automount[10615]: expire_proc: exp_proc = 139681865455360 path /local
May  2 16:58:30 core400 automount[10615]: expire_proc_indirect: expire /local/core330
May  2 16:58:30 core400 automount[10615]: 1 remaining in /local
May  2 16:58:30 core400 automount[10615]: expire_cleanup: got thid 139681865455360 path /local stat 3
May  2 16:58:30 core400 automount[10615]: expire_cleanup: sigchld: exp 139681865455360 finished, switching from 2 to 1
May  2 16:58:30 core400 automount[10615]: st_ready: st_ready(): state = 2 path /local
May  2 16:58:34 core400 automount[10615]: st_expire: state 1 path /local
May  2 16:58:34 core400 automount[10615]: expire_proc: exp_proc = 139681865455360 path /local
May  2 16:58:34 core400 automount[10615]: expire_proc_indirect: expire /local/core330
May  2 16:58:34 core400 automount[10615]: 1 remaining in /local
May  2 16:58:34 core400 automount[10615]: expire_cleanup: got thid 139681865455360 path /local stat 3
May  2 16:58:34 core400 automount[10615]: expire_cleanup: sigchld: exp 139681865455360 finished, switching from 2 to 1
May  2 16:58:34 core400 automount[10615]: st_ready: st_ready(): state = 2 path /local
May  2 16:58:38 core400 automount[10615]: st_expire: state 1 path /local
May  2 16:58:38 core400 automount[10615]: expire_proc: exp_proc = 139681865455360 path /local
May  2 16:58:38 core400 automount[10615]: expire_proc_indirect: expire /local/core330
May  2 16:58:38 core400 automount[10615]: 1 remaining in /local
May  2 16:58:38 core400 automount[10615]: expire_cleanup: got thid 139681865455360 path /local stat 3
May  2 16:58:38 core400 automount[10615]: expire_cleanup: sigchld: exp 139681865455360 finished, switching from 2 to 1
May  2 16:58:38 core400 automount[10615]: st_ready: st_ready(): state = 2 path /local
May  2 16:58:42 core400 automount[10615]: st_expire: state 1 path /local
May  2 16:58:42 core400 automount[10615]: expire_proc: exp_proc = 139681865455360 path /local
May  2 16:58:42 core400 automount[10615]: expire_proc_indirect: expire /local/core330
May  2 16:58:42 core400 automount[10615]: handle_packet: type = 4
May  2 16:58:42 core400 automount[10615]: handle_packet_expire_indirect: token 10, name core330
May  2 16:58:42 core400 automount[10615]: expiring path /local/core330
May  2 16:58:42 core400 automount[10615]: umount_multi: path /local/core330 incl 1
May  2 16:58:42 core400 automount[10615]: umount_subtree_mounts: unmounting dir = /local/core330
May  2 16:58:42 core400 automount[10615]: rm_unwanted_fn: removing directory /local/core330
May  2 16:58:42 core400 automount[10615]: expired /local/core330
May  2 16:58:42 core400 automount[10615]: dev_ioctl_send_ready: token = 10
May  2 16:58:43 core400 automount[10615]: expire_cleanup: got thid 139681865455360 path /local stat 0
May  2 16:58:43 core400 automount[10615]: expire_cleanup: sigchld: exp 139681865455360 finished, switching from 2 to 1
May  2 16:58:43 core400 automount[10615]: st_ready: st_ready(): state = 2 path /local
May  2 16:58:45 core400 automount[10615]: handle_packet: type = 3
May  2 16:58:45 core400 automount[10615]: handle_packet_missing_indirect: token 11, name core330, request pid 10725
May  2 16:58:45 core400 automount[10615]: attempting to mount entry /local/core330
May  2 16:58:45 core400 automount[10615]: lookup_mount: lookup(program): core330 -> -fstype=nfs4,rw,intr,nosuid,soft,nodev core330:/locals
May  2 16:58:45 core400 automount[10615]: lookup_mount: lookup(program): looking up core330
May  2 16:58:45 core400 automount[10615]: lookup_mount: lookup(program): core330 -> -fstype=nfs4,rw,intr,nosuid,soft,nodev core330:/locals
May  2 16:58:45 core400 automount[10615]: parse_mount: parse(sun): expanded entry: -fstype=nfs4,rw,intr,nosuid,soft,nodev core330:/locals
May  2 16:58:45 core400 automount[10615]: parse_mount: parse(sun): gathered options: fstype=nfs4,rw,intr,nosuid,soft,nodev
May  2 16:58:45 core400 automount[10615]: parse_mount: parse(sun): dequote("core330:/locals") -> core330:/locals
May  2 16:58:45 core400 automount[10615]: parse_mount: parse(sun): core of entry: options=fstype=nfs4,rw,intr,nosuid,soft,nodev, loc=core330:/locals
May  2 16:58:45 core400 automount[10615]: sun_mount: parse(sun): mounting root /local, mountpoint core330, what core330:/locals, fstype nfs4, options rw,intr,nosuid,soft,nodev
May  2 16:58:45 core400 automount[10615]: mount_mount: mount(nfs): root=/local name=core330 what=core330:/locals, fstype=nfs4, options=rw,intr,nosuid,soft,nodev
May  2 16:58:45 core400 automount[10615]: mount_mount: mount(nfs): nfs options="rw,intr,nosuid,soft,nodev", nobind=0, nosymlink=0, ro=0
May  2 16:58:45 core400 automount[10615]: get_nfs_info: called with host core330(192.168.220.118) proto 6 version 0x40
May  2 16:58:45 core400 automount[10615]: get_nfs_info: nfs v4 rpc ping time: 0.000123
May  2 16:58:45 core400 automount[10615]: get_nfs_info: host core330 cost 123 weight 0
May  2 16:58:45 core400 automount[10615]: get_nfs_info: called with host core330(2001:638:708:1261:2000::118) proto 6 version 0x40
May  2 16:58:45 core400 automount[10615]: get_nfs_info: nfs v4 rpc ping time: 0.000140
May  2 16:58:45 core400 automount[10615]: get_nfs_info: host core330 cost 139 weight 0
May  2 16:58:45 core400 automount[10615]: get_nfs_info: called with host core330(fd5f:852:a27c:1261:2000::118) proto 6 version 0x40
May  2 16:58:45 core400 automount[10615]: get_nfs_info: nfs v4 rpc ping time: 0.000148
May  2 16:58:45 core400 automount[10615]: get_nfs_info: host core330 cost 148 weight 0
May  2 16:58:45 core400 automount[10615]: prune_host_list: selected subset of hosts that support NFS4 over TCP
May  2 16:58:45 core400 automount[10615]: mount_mount: mount(nfs): calling mkdir_path /local/core330
May  2 16:58:45 core400 automount[10615]: mount_mount: mount(nfs): calling mount -t nfs4 -s -o rw,intr,nosuid,soft,nodev 192.168.220.118:/locals /local/core330
May  2 16:58:45 core400 automount[10615]: mount_mount: mount(nfs): mounted 192.168.220.118:/locals on /local/core330
May  2 16:58:45 core400 automount[10615]: dev_ioctl_send_ready: token = 11
May  2 16:58:45 core400 automount[10615]: mounted /local/core330
May  2 16:58:47 core400 automount[10615]: st_expire: state 1 path /local
May  2 16:58:47 core400 automount[10615]: expire_proc: exp_proc = 139681865455360 path /local
May  2 16:58:47 core400 automount[10615]: expire_proc_indirect: expire /local/core330
May  2 16:58:47 core400 automount[10615]: 1 remaining in /local
May  2 16:58:47 core400 automount[10615]: expire_cleanup: got thid 139681865455360 path /local stat 3
May  2 16:58:47 core400 automount[10615]: expire_cleanup: sigchld: exp 139681865455360 finished, switching from 2 to 1
May  2 16:58:47 core400 automount[10615]: st_ready: st_ready(): state = 2 path /local
May  2 16:58:51 core400 automount[10615]: st_expire: state 1 path /local
May  2 16:58:51 core400 automount[10615]: expire_proc: exp_proc = 139681865455360 path /local
May  2 16:58:51 core400 automount[10615]: expire_proc_indirect: expire /local/core330
May  2 16:58:51 core400 automount[10615]: 1 remaining in /local
May  2 16:58:51 core400 automount[10615]: expire_cleanup: got thid 139681865455360 path /local stat 3
May  2 16:58:51 core400 automount[10615]: expire_cleanup: sigchld: exp 139681865455360 finished, switching from 2 to 1
May  2 16:58:51 core400 automount[10615]: st_ready: st_ready(): state = 2 path /local
May  2 16:58:55 core400 automount[10615]: st_expire: state 1 path /local
May  2 16:58:55 core400 automount[10615]: expire_proc: exp_proc = 139681865455360 path /local
May  2 16:58:55 core400 automount[10615]: expire_proc_indirect: expire /local/core330
May  2 16:58:55 core400 automount[10615]: 1 remaining in /local
May  2 16:58:55 core400 automount[10615]: expire_cleanup: got thid 139681865455360 path /local stat 3
May  2 16:58:55 core400 automount[10615]: expire_cleanup: sigchld: exp 139681865455360 finished, switching from 2 to 1
May  2 16:58:55 core400 automount[10615]: st_ready: st_ready(): state = 2 path /local
May  2 16:58:59 core400 automount[10615]: st_expire: state 1 path /local
May  2 16:58:59 core400 automount[10615]: expire_proc: exp_proc = 139681865455360 path /local
May  2 16:58:59 core400 automount[10615]: expire_proc_indirect: expire /local/core330
May  2 16:58:59 core400 automount[10615]: 1 remaining in /local
May  2 16:58:59 core400 automount[10615]: expire_cleanup: got thid 139681865455360 path /local stat 3
May  2 16:58:59 core400 automount[10615]: expire_cleanup: sigchld: exp 139681865455360 finished, switching from 2 to 1
May  2 16:58:59 core400 automount[10615]: st_ready: st_ready(): state = 2 path /local
May  2 16:59:03 core400 automount[10615]: st_expire: state 1 path /local
May  2 16:59:03 core400 automount[10615]: expire_proc: exp_proc = 139681865455360 path /local
May  2 16:59:03 core400 automount[10615]: expire_proc_indirect: expire /local/core330
May  2 16:59:03 core400 automount[10615]: handle_packet: type = 4
May  2 16:59:03 core400 automount[10615]: handle_packet_expire_indirect: token 12, name core330
May  2 16:59:03 core400 automount[10615]: expiring path /local/core330
May  2 16:59:03 core400 automount[10615]: umount_multi: path /local/core330 incl 1
May  2 16:59:03 core400 automount[10615]: umount_subtree_mounts: unmounting dir = /local/core330
May  2 16:59:03 core400 automount[10615]: rm_unwanted_fn: removing directory /local/core330
May  2 16:59:03 core400 automount[10615]: expired /local/core330
May  2 16:59:03 core400 automount[10615]: dev_ioctl_send_ready: token = 12
May  2 16:59:03 core400 automount[10615]: expire_cleanup: got thid 139681865455360 path /local stat 0
May  2 16:59:03 core400 automount[10615]: expire_cleanup: sigchld: exp 139681865455360 finished, switching from 2 to 1
May  2 16:59:03 core400 automount[10615]: st_ready: st_ready(): state = 2 path /local

[-- Attachment #3: log2.txt --]
[-- Type: text/plain, Size: 7447 bytes --]

May  2 17:00:14 core400 autofs[10785]:    ...done.
May  2 17:00:14 core400 systemd[1]: Started LSB: Automounts filesystems on demand.
May  2 17:00:20 core400 automount[10800]: handle_packet: type = 3
May  2 17:00:20 core400 automount[10800]: handle_packet_missing_indirect: token 13, name core330, request pid 10809
May  2 17:00:20 core400 automount[10800]: attempting to mount entry /local/core330
May  2 17:00:20 core400 automount[10800]: lookup_mount: lookup(program): looking up core330
May  2 17:00:20 core400 automount[10800]: lookup_mount: lookup(program): core330 -> -fstype=nfs4,rw,intr,nosuid,soft,nodev [fd5f:852:a27c:1261:2000::118]:/locals
May  2 17:00:20 core400 automount[10800]: parse_mount: parse(sun): expanded entry: -fstype=nfs4,rw,intr,nosuid,soft,nodev [fd5f:852:a27c:1261:2000::118]:/locals
May  2 17:00:20 core400 automount[10800]: parse_mount: parse(sun): gathered options: fstype=nfs4,rw,intr,nosuid,soft,nodev
May  2 17:00:20 core400 automount[10800]: parse_mount: parse(sun): dequote("[fd5f:852:a27c:1261:2000::118]:/locals") -> [fd5f:852:a27c:1261:2000::118]:/locals
May  2 17:00:20 core400 automount[10800]: parse_mount: parse(sun): core of entry: options=fstype=nfs4,rw,intr,nosuid,soft,nodev, loc=[fd5f:852:a27c:1261:2000::118]:/locals
May  2 17:00:20 core400 automount[10800]: sun_mount: parse(sun): mounting root /local, mountpoint core330, what [fd5f:852:a27c:1261:2000::118]:/locals, fstype nfs4, options rw,intr,nosuid,soft,nodev
May  2 17:00:20 core400 automount[10800]: mount_mount: mount(nfs): root=/local name=core330 what=[fd5f:852:a27c:1261:2000::118]:/locals, fstype=nfs4, options=rw,intr,nosuid,soft,nodev
May  2 17:00:20 core400 automount[10800]: mount_mount: mount(nfs): nfs options="rw,intr,nosuid,soft,nodev", nobind=0, nosymlink=0, ro=0
May  2 17:00:20 core400 automount[10800]: get_nfs_info: called with host [fd5f:852:a27c:1261:2000::118](fd5f:852:a27c:1261:2000::118) proto 6 version 0x40
May  2 17:00:20 core400 automount[10800]: get_nfs_info: nfs v4 rpc ping time: 0.000132
May  2 17:00:20 core400 automount[10800]: get_nfs_info: host [fd5f:852:a27c:1261:2000::118] cost 132 weight 0
May  2 17:00:20 core400 automount[10800]: prune_host_list: selected subset of hosts that support NFS4 over TCP
May  2 17:00:20 core400 automount[10800]: mount_mount: mount(nfs): calling mkdir_path /local/core330
May  2 17:00:20 core400 automount[10800]: mount_mount: mount(nfs): calling mount -t nfs4 -s -o rw,intr,nosuid,soft,nodev [fd5f:852:a27c:1261:2000::118]:/locals /local/core330
May  2 17:00:20 core400 automount[10800]: mount_mount: mount(nfs): mounted [fd5f:852:a27c:1261:2000::118]:/locals on /local/core330
May  2 17:00:20 core400 automount[10800]: dev_ioctl_send_ready: token = 13
May  2 17:00:20 core400 automount[10800]: mounted /local/core330
May  2 17:00:21 core400 automount[10800]: st_expire: state 1 path /local
May  2 17:00:21 core400 automount[10800]: expire_proc: exp_proc = 140215048800000 path /local
May  2 17:00:21 core400 automount[10800]: expire_proc_indirect: expire /local/core330
May  2 17:00:21 core400 automount[10800]: 1 remaining in /local
May  2 17:00:21 core400 automount[10800]: expire_cleanup: got thid 140215048800000 path /local stat 3
May  2 17:00:21 core400 automount[10800]: expire_cleanup: sigchld: exp 140215048800000 finished, switching from 2 to 1
May  2 17:00:21 core400 automount[10800]: st_ready: st_ready(): state = 2 path /local
May  2 17:00:25 core400 automount[10800]: st_expire: state 1 path /local
May  2 17:00:25 core400 automount[10800]: expire_proc: exp_proc = 140215048800000 path /local
May  2 17:00:25 core400 automount[10800]: expire_proc_indirect: expire /local/core330
May  2 17:00:25 core400 automount[10800]: 1 remaining in /local
May  2 17:00:25 core400 automount[10800]: expire_cleanup: got thid 140215048800000 path /local stat 3
May  2 17:00:25 core400 automount[10800]: expire_cleanup: sigchld: exp 140215048800000 finished, switching from 2 to 1
May  2 17:00:25 core400 automount[10800]: st_ready: st_ready(): state = 2 path /local
May  2 17:00:29 core400 automount[10800]: st_expire: state 1 path /local
May  2 17:00:29 core400 automount[10800]: expire_proc: exp_proc = 140215048800000 path /local
May  2 17:00:29 core400 automount[10800]: expire_proc_indirect: expire /local/core330
May  2 17:00:29 core400 automount[10800]: 1 remaining in /local
May  2 17:00:29 core400 automount[10800]: expire_cleanup: got thid 140215048800000 path /local stat 3
May  2 17:00:29 core400 automount[10800]: expire_cleanup: sigchld: exp 140215048800000 finished, switching from 2 to 1
May  2 17:00:29 core400 automount[10800]: st_ready: st_ready(): state = 2 path /local
May  2 17:00:33 core400 automount[10800]: st_expire: state 1 path /local
May  2 17:00:33 core400 automount[10800]: expire_proc: exp_proc = 140215048800000 path /local
May  2 17:00:33 core400 automount[10800]: expire_proc_indirect: expire /local/core330
May  2 17:00:33 core400 automount[10800]: 1 remaining in /local
May  2 17:00:33 core400 automount[10800]: expire_cleanup: got thid 140215048800000 path /local stat 3
May  2 17:00:33 core400 automount[10800]: expire_cleanup: sigchld: exp 140215048800000 finished, switching from 2 to 1
May  2 17:00:33 core400 automount[10800]: st_ready: st_ready(): state = 2 path /local
May  2 17:00:37 core400 automount[10800]: st_expire: state 1 path /local
May  2 17:00:37 core400 automount[10800]: expire_proc: exp_proc = 140215048800000 path /local
May  2 17:00:37 core400 automount[10800]: expire_proc_indirect: expire /local/core330
May  2 17:00:37 core400 automount[10800]: handle_packet: type = 4
May  2 17:00:37 core400 automount[10800]: handle_packet_expire_indirect: token 14, name core330
May  2 17:00:37 core400 automount[10800]: expiring path /local/core330
May  2 17:00:37 core400 automount[10800]: umount_multi: path /local/core330 incl 1
May  2 17:00:37 core400 automount[10800]: umount_subtree_mounts: unmounting dir = /local/core330
May  2 17:00:37 core400 automount[10800]: rm_unwanted_fn: removing directory /local/core330
May  2 17:00:37 core400 automount[10800]: expired /local/core330
May  2 17:00:37 core400 automount[10800]: dev_ioctl_send_ready: token = 14
May  2 17:00:37 core400 automount[10800]: expire_cleanup: got thid 140215048800000 path /local stat 0
May  2 17:00:37 core400 automount[10800]: expire_cleanup: sigchld: exp 140215048800000 finished, switching from 2 to 1
May  2 17:00:37 core400 automount[10800]: st_ready: st_ready(): state = 2 path /local
May  2 17:00:41 core400 automount[10800]: st_expire: state 1 path /local
May  2 17:00:41 core400 automount[10800]: expire_proc: exp_proc = 140215048800000 path /local
May  2 17:00:41 core400 automount[10800]: expire_cleanup: got thid 140215048800000 path /local stat 0
May  2 17:00:41 core400 automount[10800]: expire_cleanup: sigchld: exp 140215048800000 finished, switching from 2 to 1
May  2 17:00:41 core400 automount[10800]: st_ready: st_ready(): state = 2 path /local
May  2 17:00:45 core400 automount[10800]: st_expire: state 1 path /local
May  2 17:00:45 core400 automount[10800]: expire_proc: exp_proc = 140215048800000 path /local
May  2 17:00:45 core400 automount[10800]: expire_cleanup: got thid 140215048800000 path /local stat 0
May  2 17:00:45 core400 automount[10800]: expire_cleanup: sigchld: exp 140215048800000 finished, switching from 2 to 1
May  2 17:00:45 core400 automount[10800]: st_ready: st_ready(): state = 2 path /local

[-- Attachment #4: log3.txt --]
[-- Type: text/plain, Size: 13032 bytes --]

May  2 17:02:04 core400 systemd[1]: Stopped LSB: Automounts filesystems on demand.
May  2 17:02:04 core400 systemd[1]: Starting LSB: Automounts filesystems on demand...
May  2 17:02:04 core400 autofs[10881]:  * Starting automount...
May  2 17:02:04 core400 automount[10895]: Starting automounter version 5.1.1, master map /etc/auto.master
May  2 17:02:04 core400 automount[10895]: using kernel protocol version 5.02
May  2 17:02:04 core400 automount[10895]: lookup_nss_read_master: reading master file /etc/auto.master
May  2 17:02:04 core400 automount[10895]: parse_init: parse(sun): init gathered global options: (null)
May  2 17:02:04 core400 automount[10895]: lookup_read_master: lookup(file): read entry /local
May  2 17:02:04 core400 automount[10895]: master_do_mount: mounting /local
May  2 17:02:04 core400 automount[10895]: automount_path_to_fifo: fifo name /var/run/autofs.fifo-local
May  2 17:02:04 core400 automount[10895]: lookup_nss_read_map: reading map file /etc/auto.local
May  2 17:02:04 core400 automount[10895]: parse_init: parse(sun): init gathered global options: (null)
May  2 17:02:04 core400 automount[10895]: mounted indirect on /local with timeout 15, freq 4 seconds
May  2 17:02:04 core400 automount[10895]: st_ready: st_ready(): state = 0 path /local
May  2 17:02:04 core400 autofs[10881]:    ...done.
May  2 17:02:04 core400 systemd[1]: Started LSB: Automounts filesystems on demand.
May  2 17:02:10 core400 automount[10895]: st_expire: state 1 path /local
May  2 17:02:10 core400 automount[10895]: expire_proc: exp_proc = 140129564624640 path /local
May  2 17:02:10 core400 automount[10895]: expire_cleanup: got thid 140129564624640 path /local stat 0
May  2 17:02:10 core400 automount[10895]: expire_cleanup: sigchld: exp 140129564624640 finished, switching from 2 to 1
May  2 17:02:10 core400 automount[10895]: st_ready: st_ready(): state = 2 path /local
May  2 17:02:14 core400 automount[10895]: st_expire: state 1 path /local
May  2 17:02:14 core400 automount[10895]: expire_proc: exp_proc = 140129564624640 path /local
May  2 17:02:14 core400 automount[10895]: expire_cleanup: got thid 140129564624640 path /local stat 0
May  2 17:02:14 core400 automount[10895]: expire_cleanup: sigchld: exp 140129564624640 finished, switching from 2 to 1
May  2 17:02:14 core400 automount[10895]: st_ready: st_ready(): state = 2 path /local
May  2 17:02:18 core400 automount[10895]: st_expire: state 1 path /local
May  2 17:02:18 core400 automount[10895]: expire_proc: exp_proc = 140129564624640 path /local
May  2 17:02:18 core400 automount[10895]: expire_cleanup: got thid 140129564624640 path /local stat 0
May  2 17:02:18 core400 automount[10895]: expire_cleanup: sigchld: exp 140129564624640 finished, switching from 2 to 1
May  2 17:02:18 core400 automount[10895]: st_ready: st_ready(): state = 2 path /local
May  2 17:02:22 core400 automount[10895]: st_expire: state 1 path /local
May  2 17:02:22 core400 automount[10895]: expire_proc: exp_proc = 140129564624640 path /local
May  2 17:02:22 core400 automount[10895]: expire_cleanup: got thid 140129564624640 path /local stat 0
May  2 17:02:22 core400 automount[10895]: expire_cleanup: sigchld: exp 140129564624640 finished, switching from 2 to 1
May  2 17:02:22 core400 automount[10895]: st_ready: st_ready(): state = 2 path /local
May  2 17:02:26 core400 automount[10895]: st_expire: state 1 path /local
May  2 17:02:26 core400 systemd[1]: Starting Cleanup of Temporary Directories...
May  2 17:02:26 core400 automount[10895]: expire_proc: exp_proc = 140129564624640 path /local
May  2 17:02:26 core400 automount[10895]: expire_cleanup: got thid 140129564624640 path /local stat 0
May  2 17:02:26 core400 automount[10895]: expire_cleanup: sigchld: exp 140129564624640 finished, switching from 2 to 1
May  2 17:02:26 core400 automount[10895]: st_ready: st_ready(): state = 2 path /local
May  2 17:02:26 core400 systemd-tmpfiles[10908]: [/usr/lib/tmpfiles.d/var.conf:14] Duplicate line for path "/var/log", ignoring.
May  2 17:02:26 core400 systemd[1]: Started Cleanup of Temporary Directories.
May  2 17:02:30 core400 automount[10895]: st_expire: state 1 path /local
May  2 17:02:30 core400 automount[10895]: expire_proc: exp_proc = 140129564624640 path /local
May  2 17:02:30 core400 automount[10895]: expire_cleanup: got thid 140129564624640 path /local stat 0
May  2 17:02:30 core400 automount[10895]: expire_cleanup: sigchld: exp 140129564624640 finished, switching from 2 to 1
May  2 17:02:30 core400 automount[10895]: st_ready: st_ready(): state = 2 path /local
May  2 17:02:32 core400 automount[10895]: handle_packet: type = 3
May  2 17:02:32 core400 automount[10895]: handle_packet_missing_indirect: token 15, name core330, request pid 10914
May  2 17:02:32 core400 automount[10895]: attempting to mount entry /local/core330
May  2 17:02:32 core400 automount[10895]: lookup_mount: lookup(program): looking up core330
May  2 17:02:32 core400 automount[10895]: lookup_mount: lookup(program): core330 -> -fstype=nfs4,rw,intr,nosuid,soft,nodev core330:/locals
May  2 17:02:32 core400 automount[10895]: parse_mount: parse(sun): expanded entry: -fstype=nfs4,rw,intr,nosuid,soft,nodev core330:/locals
May  2 17:02:32 core400 automount[10895]: parse_mount: parse(sun): gathered options: fstype=nfs4,rw,intr,nosuid,soft,nodev
May  2 17:02:32 core400 automount[10895]: parse_mount: parse(sun): dequote("core330:/locals") -> core330:/locals
May  2 17:02:32 core400 automount[10895]: parse_mount: parse(sun): core of entry: options=fstype=nfs4,rw,intr,nosuid,soft,nodev, loc=core330:/locals
May  2 17:02:32 core400 automount[10895]: sun_mount: parse(sun): mounting root /local, mountpoint core330, what core330:/locals, fstype nfs4, options rw,intr,nosuid,soft,nodev
May  2 17:02:32 core400 automount[10895]: mount_mount: mount(nfs): root=/local name=core330 what=core330:/locals, fstype=nfs4, options=rw,intr,nosuid,soft,nodev
May  2 17:02:32 core400 automount[10895]: mount_mount: mount(nfs): nfs options="rw,intr,nosuid,soft,nodev", nobind=0, nosymlink=0, ro=0
May  2 17:02:32 core400 automount[10895]: get_nfs_info: called with host core330(192.168.220.118) proto 6 version 0x40
May  2 17:02:32 core400 automount[10895]: get_nfs_info: nfs v4 rpc ping time: 0.000208
May  2 17:02:32 core400 automount[10895]: get_nfs_info: host core330 cost 207 weight 0
May  2 17:02:32 core400 automount[10895]: get_nfs_info: called with host core330(fd5f:852:a27c:1261:2000::118) proto 6 version 0x40
May  2 17:02:32 core400 automount[10895]: get_nfs_info: nfs v4 rpc ping time: 0.000129
May  2 17:02:32 core400 automount[10895]: get_nfs_info: host core330 cost 128 weight 0
May  2 17:02:32 core400 automount[10895]: get_nfs_info: called with host core330(2001:638:708:1261:2000::118) proto 6 version 0x40
May  2 17:02:32 core400 automount[10895]: get_nfs_info: nfs v4 rpc ping time: 0.000109
May  2 17:02:32 core400 automount[10895]: get_nfs_info: host core330 cost 109 weight 0
May  2 17:02:32 core400 automount[10895]: prune_host_list: selected subset of hosts that support NFS4 over TCP
May  2 17:02:32 core400 automount[10895]: mount_mount: mount(nfs): calling mkdir_path /local/core330
May  2 17:02:32 core400 automount[10895]: mount_mount: mount(nfs): calling mount -t nfs4 -s -o rw,intr,nosuid,soft,nodev [2001:638:708:1261:2000::118]:/locals /local/core330
May  2 17:02:32 core400 automount[10895]: >> mount.nfs4: access denied by server while mounting [2001:638:708:1261:2000::118]:/locals
May  2 17:02:32 core400 automount[10895]: mount_mount: mount(nfs): calling mount -t nfs4 -s -o rw,intr,nosuid,soft,nodev [fd5f:852:a27c:1261:2000::118]:/locals /local/core330
May  2 17:02:32 core400 automount[10895]: mount_mount: mount(nfs): mounted [fd5f:852:a27c:1261:2000::118]:/locals on /local/core330
May  2 17:02:32 core400 automount[10895]: dev_ioctl_send_ready: token = 15
May  2 17:02:32 core400 automount[10895]: mounted /local/core330
May  2 17:02:34 core400 automount[10895]: st_expire: state 1 path /local
May  2 17:02:34 core400 automount[10895]: expire_proc: exp_proc = 140129564624640 path /local
May  2 17:02:34 core400 automount[10895]: expire_proc_indirect: expire /local/core330
May  2 17:02:34 core400 automount[10895]: 1 remaining in /local
May  2 17:02:34 core400 automount[10895]: expire_cleanup: got thid 140129564624640 path /local stat 3
May  2 17:02:34 core400 automount[10895]: expire_cleanup: sigchld: exp 140129564624640 finished, switching from 2 to 1
May  2 17:02:34 core400 automount[10895]: st_ready: st_ready(): state = 2 path /local
May  2 17:02:38 core400 automount[10895]: st_expire: state 1 path /local
May  2 17:02:38 core400 automount[10895]: expire_proc: exp_proc = 140129564624640 path /local
May  2 17:02:38 core400 automount[10895]: expire_proc_indirect: expire /local/core330
May  2 17:02:38 core400 automount[10895]: 1 remaining in /local
May  2 17:02:38 core400 automount[10895]: expire_cleanup: got thid 140129564624640 path /local stat 3
May  2 17:02:38 core400 automount[10895]: expire_cleanup: sigchld: exp 140129564624640 finished, switching from 2 to 1
May  2 17:02:38 core400 automount[10895]: st_ready: st_ready(): state = 2 path /local
May  2 17:02:42 core400 automount[10895]: st_expire: state 1 path /local
May  2 17:02:42 core400 automount[10895]: expire_proc: exp_proc = 140129564624640 path /local
May  2 17:02:42 core400 automount[10895]: expire_proc_indirect: expire /local/core330
May  2 17:02:42 core400 automount[10895]: 1 remaining in /local
May  2 17:02:42 core400 automount[10895]: expire_cleanup: got thid 140129564624640 path /local stat 3
May  2 17:02:42 core400 automount[10895]: expire_cleanup: sigchld: exp 140129564624640 finished, switching from 2 to 1
May  2 17:02:42 core400 automount[10895]: st_ready: st_ready(): state = 2 path /local
May  2 17:02:46 core400 automount[10895]: st_expire: state 1 path /local
May  2 17:02:46 core400 automount[10895]: expire_proc: exp_proc = 140129564624640 path /local
May  2 17:02:46 core400 automount[10895]: expire_proc_indirect: expire /local/core330
May  2 17:02:46 core400 automount[10895]: 1 remaining in /local
May  2 17:02:46 core400 automount[10895]: expire_cleanup: got thid 140129564624640 path /local stat 3
May  2 17:02:46 core400 automount[10895]: expire_cleanup: sigchld: exp 140129564624640 finished, switching from 2 to 1
May  2 17:02:46 core400 automount[10895]: st_ready: st_ready(): state = 2 path /local
May  2 17:02:50 core400 automount[10895]: st_expire: state 1 path /local
May  2 17:02:50 core400 automount[10895]: expire_proc: exp_proc = 140129564624640 path /local
May  2 17:02:50 core400 automount[10895]: expire_proc_indirect: expire /local/core330
May  2 17:02:50 core400 automount[10895]: handle_packet: type = 4
May  2 17:02:50 core400 automount[10895]: handle_packet_expire_indirect: token 16, name core330
May  2 17:02:50 core400 automount[10895]: expiring path /local/core330
May  2 17:02:50 core400 automount[10895]: umount_multi: path /local/core330 incl 1
May  2 17:02:50 core400 automount[10895]: umount_subtree_mounts: unmounting dir = /local/core330
May  2 17:02:50 core400 automount[10895]: rm_unwanted_fn: removing directory /local/core330
May  2 17:02:50 core400 automount[10895]: expired /local/core330
May  2 17:02:50 core400 automount[10895]: dev_ioctl_send_ready: token = 16
May  2 17:02:50 core400 automount[10895]: expire_cleanup: got thid 140129564624640 path /local stat 0
May  2 17:02:50 core400 automount[10895]: expire_cleanup: sigchld: exp 140129564624640 finished, switching from 2 to 1
May  2 17:02:50 core400 automount[10895]: st_ready: st_ready(): state = 2 path /local
May  2 17:02:54 core400 automount[10895]: st_expire: state 1 path /local
May  2 17:02:54 core400 automount[10895]: expire_proc: exp_proc = 140129564624640 path /local
May  2 17:02:54 core400 automount[10895]: expire_cleanup: got thid 140129564624640 path /local stat 0
May  2 17:02:54 core400 automount[10895]: expire_cleanup: sigchld: exp 140129564624640 finished, switching from 2 to 1
May  2 17:02:54 core400 automount[10895]: st_ready: st_ready(): state = 2 path /local
May  2 17:02:58 core400 automount[10895]: st_expire: state 1 path /local
May  2 17:02:58 core400 automount[10895]: expire_proc: exp_proc = 140129564624640 path /local
May  2 17:02:58 core400 automount[10895]: expire_cleanup: got thid 140129564624640 path /local stat 0
May  2 17:02:58 core400 automount[10895]: expire_cleanup: sigchld: exp 140129564624640 finished, switching from 2 to 1
May  2 17:02:58 core400 automount[10895]: st_ready: st_ready(): state = 2 path /local
May  2 17:03:02 core400 automount[10895]: st_expire: state 1 path /local
May  2 17:03:02 core400 automount[10895]: expire_proc: exp_proc = 140129564624640 path /local
May  2 17:03:02 core400 automount[10895]: expire_cleanup: got thid 140129564624640 path /local stat 0
May  2 17:03:02 core400 automount[10895]: expire_cleanup: sigchld: exp 140129564624640 finished, switching from 2 to 1
May  2 17:03:02 core400 automount[10895]: st_ready: st_ready(): state = 2 path /local

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-05-02 16:08                                             ` Christof Koehler
@ 2016-05-03  7:58                                               ` Ian Kent
  2016-05-03 15:13                                                 ` Christof Koehler
  0 siblings, 1 reply; 49+ messages in thread
From: Ian Kent @ 2016-05-03  7:58 UTC (permalink / raw)
  To: christof.koehler; +Cc: autofs

On Mon, 2016-05-02 at 18:08 +0200, Christof Koehler wrote:
> Hello,
> 
> I did three tests now, each with my self-built packages and the ones
> from your ppa:
> 
> 1. multiple IPv6 addresses
> 2. square bracketed IPv6 address from map
> 3. new situation, "failover": the server has several IPv6 and one IPv4
>    address, but the server exports only to one IPv6 address
> 
> There does not appear to be any difference depending on the package
> versions
> used. Log output for each using the packages from your ppa is
> attached.

OK, I'll have a look at those.

> 
> In test 1 one address is selected based on response time, in test 2
> there is a successful mount using the specified address, and in the
> new test 3 autofs starts with the address with the lowest response
> time but, once it fails, proceeds to try the others. No
> user-noticeable delays.
> 
> Especially this failover behaviour is very nice. Effectively this
> turns steering the address selection towards what I want from a
> client-side into a server-side problem ! I can simply export just to
> ULAs and autofs figures this out for me. Is that correct ? If so, this
> is all I ever wanted: I can simply put a readable hostname in the map
> and use the server's /etc/exports to fix the address. Sorry for not
> making the connection earlier.

LOL, I didn't get it either.

The proximity and availability selection code came about originally
because Linux NFS doesn't support read-only replicated NFS mounts.
Selecting the best mount I can at mount time was all I could do.

But it has grown into an essential method of establishing whether a
target server is available and what protocols are offered, in a time
frame more suitable for interactive use than just trying to mount and
handling the error returns.

So, yes, what you're seeing is what it's supposed to do, ;)

Certainly, controlling what services are available for which address
only in the exports seems like the most sensible way to do what you
need, particularly from a maintenance POV.
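
For illustration, a minimal /etc/exports sketch along those lines; the
ULA prefix is taken from the log output above, and the /64 length and
the export options are assumptions rather than your actual
configuration:

  # export only to the ULA prefix so clients end up selecting the ULA
  /locals  fd5f:852:a27c:1261::/64(rw,no_subtree_check)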

I must admit I was a little concerned with what it sounded like you
needed to do. It seemed like there were going to be too many places
where things would need to be set properly, and I thought that would
quickly become a burden.

So, hopefully, the autofs probing code will provide what you need.

> 
> On Mon, May 02, 2016 at 02:01:42PM +0800, Ian Kent wrote:
> > On Sat, 2016-04-30 at 13:36 +0200, Christof Koehler wrote:
> > 
> > I'll update the man pages to talk about libtirpc.
> > In particular I'll say that libtirpc is required for IPv6.
> Thank you. Eventually this will also alert distro maintainers that
> their build should change, giving autofs the same libtirpc requirement
> as nfs-common. Maybe it goes into Debian testing in time for
> Ubuntu 18.04 LTS :-)
> 
> But for now rebuilding appears to work anyway. I should update the bug
> report in the Ubuntu tracker.
> 
> > What I didn't see is trying different addresses over retry on
> > failure, but mount.nfs(8) has never claimed to do that.
> 
> And on top of that, the failover feature in autofs does what I wanted
> in the first place. I was just not consciously aware of it.
> 
> > I also saw a few mailing list posts that implied the selection had
> > been modified over time and there was mention of rfc 6724 in some
> > posts.
> I believe the difference is primarily that some address classes were
> deprecated in the real world in between; see
> https://tools.ietf.org/html/rfc6724#appendix-B
> 
> > 
> > I can help with that if there are any problems.
> 
> It apparently works now with tests 2 and 3 above. So no changes are
> needed if 2 and 3 are the desired behaviour from your side ! Thank you
> for your patience.
> 
> > 
> > I don't think I require square brackets on IPv6 addresses but they
> > shouldn't cause the mount to fail either, if they are present.
> Well, apparently the string from the map is passed verbatim to mount.
> Without brackets mount is unhappy, if I remember my own testing
> correctly; nfs(5) confirms that mount needs them.
> 
> autofs and mount are both client-side programs. So requiring square
> brackets appears to be a reasonable convention. /etc/exports is server
> side and does not allow square brackets. I could live with that.

I thought I added the square brackets for some operations, mount being
one of them; I'll need to look into that.

I'm not sure I will be able to do it but ideally I'd accept either.

But I suspect I'll probably need to require brackets. The sun format
maps already have an ambiguous syntax which makes writing a parser
generator definition really hard, so not adding to that by introducing
further ambiguity with IPv6 addresses is probably best.
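
For reference, a sun-format entry with a literal IPv6 address (as in
your test 2) would presumably look something like the sketch below; the
address is the ULA from the log output and the options are the ones
from your map, shown here purely as an illustration:

  core330  -fstype=nfs4,rw,intr,nosuid,soft,nodev  [fd5f:852:a27c:1261:2000::118]:/locals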

> 
> > 
> > It's a little hard to work out actually because the parsing code
> > for sun format mount maps (but not the master map) is in several
> > different locations (a background task I have is to change this to
> > a YACC parser and do almost all the parsing in one place).
> That I do not know. I see the problem.
> 
> But maybe related to this:
> We are still using NIS and actually we had the autofs map in NIS for
> years.
> If you want I can try what happens if I put an IPv6 address into one
> of our NIS maps (reactivate one of the autofs maps). The server is a
> Debian oldstable.
> 
> I removed the autofs maps from NIS because I am not sure if NIS is
> supported over IPv6, and so it might not be a viable option if we have
> IPv6-only machines as mentioned earlier. I am looking at
> openldap/kerberos every few months trying to convince myself that
> moving to it is worth the effort compared to keeping everything in
> ansible-managed local files.
> 
> There is http://www.linux-nis.org/nis-ipv6/, but a
> ldd /usr/sbin/ypbind|grep rpc comes up empty.

Actually I hadn't thought about NIS and IPv6 and I haven't seen any
discussion about it so it probably isn't ok with IPv6.

The autofs LDAP code is complicated, it's difficult to work with.

Just recently I had a very difficult problem that appeared to be caused
by the OpenLDAP build used by autofs having changed from OpenSSL to NSS
(and related libraries).

There appeared to be thread safety problems with that code, so all I
could do was widen the mutual exclusion regions to get around it, as
well as make some other not-so-simple changes for other problems.

Don't get me wrong, during the investigation I found a number of
problems with the autofs code too, so I can't point the finger at others
for code that isn't quite right either, ;)

Anyway, for my part, if you have a system that your site is comfortable
with that suits your needs, and you don't have a compelling reason to
move to LDAP, I'd recommend holding back from moving to it.

> 
> > The list is constructed on the fly at mount time and those that
> > don't respond are removed, leaving only hosts (or addresses) for
> > which a mount needs to be attempted.
> That is test 3 above, right ? Appears to work fine. Thank you for
> bringing this up, that is where I got the idea.

LOL, great.

Ian

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-05-03  7:58                                               ` Ian Kent
@ 2016-05-03 15:13                                                 ` Christof Koehler
  2016-05-04  7:20                                                   ` Ian Kent
  0 siblings, 1 reply; 49+ messages in thread
From: Christof Koehler @ 2016-05-03 15:13 UTC (permalink / raw)
  To: Ian Kent; +Cc: autofs

Hello,

On Tue, May 03, 2016 at 03:58:38PM +0800, Ian Kent wrote:
> On Mon, 2016-05-02 at 18:08 +0200, Christof Koehler wrote:
> > We are still using NIS and actually we had the autofs map in  NIS
> > for years. 
> 
> Actually I hadn't thought about NIS and IPv6 and I haven't seen any
> discussion about it so it probably isn't ok with IPv6.
Still, if you need any tests ...

> 
> The autofs LDAP code is complicated, it's difficult to work with.
> 
> Anyway, for my part, if you have a system that your site is comfortable
> with that suits your needs, and you don't have a compelling reason to
> move to LDAP, I'd recommend holding back from moving to it.

I believe that, especially considering that there are three (?) different
schemas for storing them. That's sufficient to convince me not to store our
non-executable maps in LDAP (if I ever move to LDAP). I can manage them
fine using ansible, I believe.


Best Regards

Christof

-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-05-03 15:13                                                 ` Christof Koehler
@ 2016-05-04  7:20                                                   ` Ian Kent
  2016-05-04 12:38                                                     ` Christof Koehler
  0 siblings, 1 reply; 49+ messages in thread
From: Ian Kent @ 2016-05-04  7:20 UTC (permalink / raw)
  To: christof.koehler; +Cc: autofs

On Tue, 2016-05-03 at 17:13 +0200, Christof Koehler wrote:
> Hello,
> 
> On Tue, May 03, 2016 at 03:58:38PM +0800, Ian Kent wrote:
> > On Mon, 2016-05-02 at 18:08 +0200, Christof Koehler wrote:
> > > We are still using NIS and actually we had the autofs map in  NIS
> > > for years. 
> > 
> > Actually I hadn't thought about NIS and IPv6 and I haven't seen any
> > discussion about it so it probably isn't ok with IPv6.
> Still, if you need any tests ...

Right, thanks.

> 
> > 
> > The autofs LDAP code is complicated, it's difficult to work with.
> > 
> > Anyway, for my part, if you have a system that your site is
> > comfortable
> > with that suits your needs, and you don't have a compelling reason
> > to
> > move to LDAP, I'd recommend holding back from moving to it.
> 
> I believe that, especially considering that there are three (?)
> different
> schemas for storing them. That's sufficient to convince me to not
> store our 
> non-executable maps in LDAP (if I ever move to LDAP). I can manage
> them
> fine using ansible I believe.

There's really only two when it comes down to it.

I get confused about the rfc designations for these.

I think rfc2307 is what I have called the NIS schema and rfc2307bis is
what I usually call, well, rfc2307bis.

The rfc 2307 schema is a subset of rfc 2307bis, so they are really just
one schema. The bis schema adds automount* attributes to the existing
nis* attributes.

One might use the nis* attribute names if coming from a NIS environment,
especially if there were conversion utilities available.

But the bis schema is preferred IMHO.

The main advantage of the bis schema is that the automountKey attribute
is case sensitive; the NIS schema uses the cn attribute which, IIRC, is
not case sensitive.

There is another schema which I think I have mistakenly called rfc 2307
in the past, but I think that one uses incorrect OID assignments so it
shouldn't be used.
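
A minimal sketch of a map and one entry using the bis attributes; the
base DN and the map name are made up for the example, only the object
class and attribute names matter here:

  dn: automountMapName=auto.local,dc=example,dc=org
  objectClass: top
  objectClass: automountMap
  automountMapName: auto.local

  dn: automountKey=core330,automountMapName=auto.local,dc=example,dc=org
  objectClass: top
  objectClass: automount
  automountKey: core330
  automountInformation: -fstype=nfs4,rw,intr,nosuid,soft,nodev core330:/locals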

Ian

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: autofs reverts to IPv4 for multi-homed IPv6 server ?
  2016-05-04  7:20                                                   ` Ian Kent
@ 2016-05-04 12:38                                                     ` Christof Koehler
  0 siblings, 0 replies; 49+ messages in thread
From: Christof Koehler @ 2016-05-04 12:38 UTC (permalink / raw)
  To: Ian Kent; +Cc: autofs

Hello,

there was one more thing I promised to test: mounting from an NFS
server which only has IPv6 addresses. I have done this now.

The server is a CentOS 7 with a minimal configuration. There is no IPv4
configured and IPv6 is configured with a GUA and a ULA, with matching
forward/reverse DNS entries, of course. Using an autofs built with
libtirpc (either my rebuild or the packages from your ppa) there are no
client-side problems; it can mount fine using one of the IPv6 addresses
after the response time check.

May  4 13:50:46 core400 automount[9642]: mount_mount: mount(nfs): calling mount -t nfs4 -s -o rw,intr,nosuid,soft,nodev [2001:638:708:1261:1000::9]:/mnt/export /local/testserver
May  4 13:50:46 core400 automount[9642]: mount_mount: mount(nfs): mounted [2001:638:708:1261:1000::9]:/mnt/export on /local/testserver

With the autofs package from Ubuntu, i.e. without libtirpc, it does not
work at all, as you expected.

May  4 13:53:07 core400 automount[10949]: mount_mount: mount(nfs): root=/local name=testserver what=testserver:/mnt/export, fstype=nfs4, options=rw,intr,nosuid,soft,nodev
May  4 13:53:07 core400 automount[10949]: mount_mount: mount(nfs): nfs options="rw,intr,nosuid,soft,nodev", nobind=0, nosymlink=0, ro=0
May  4 13:53:07 core400 automount[10949]: mount(nfs): no hosts available
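
For reference, the libtirpc-enabled rebuild is essentially the stock
build with one extra configure switch; a rough sketch, the exact build
dependency package names (libtirpc-dev and friends) vary by
distribution:

  ./configure --with-libtirpc
  make
  sudo make install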

I will have to check the server side again; apparently with just
"core*.bccms.uni-bremen.de" in /etc/exports it either does not listen on
all IPv6 addresses or it only exports to one of them (DNS lookup
order ?). But this is something I also observed on our Debian/Ubuntu
servers with a similar configuration. No real problem, I can export to
the ULAs directly if I want (or let autofs figure it out for me :-).
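
A quick sketch of what I plan to check on the server side, using
standard tools only, nothing autofs specific:

  exportfs -v          # lists the active exports with their client specs and effective options
  ss -tln | grep 2049  # shows whether nfsd is listening on IPv4, IPv6 or both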

On Wed, May 04, 2016 at 03:20:53PM +0800, Ian Kent wrote:
> On Tue, 2016-05-03 at 17:13 +0200, Christof Koehler wrote:
> 
> But the bis schema is preferred IMHO.
> 
I will take a second look at this then. Thank you !


Best Regards

Christof

-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@bccms.uni-bremen.de
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/

^ permalink raw reply	[flat|nested] 49+ messages in thread

end of thread, other threads:[~2016-05-04 12:38 UTC | newest]

Thread overview: 49+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-04-07 14:19 autofs reverts to IPv4 for multi-homed IPv6 server ? Christof Koehler
2016-04-08  4:46 ` Ian Kent
2016-04-08 10:10   ` Ian Kent
2016-04-08 10:14     ` Ian Kent
2016-04-08 12:25     ` Christof Koehler
2016-04-08 14:29       ` Christof Koehler
2016-04-08 15:32         ` Christof Koehler
2016-04-10  2:09           ` Ian Kent
2016-04-08 16:12         ` Christof Koehler
2016-04-08 16:15           ` Christof Koehler
2016-04-10  2:17             ` Ian Kent
2016-04-10  2:14           ` Ian Kent
2016-04-09  1:42         ` Ian Kent
2016-04-09  9:56           ` Christof Koehler
2016-04-10  2:29             ` Ian Kent
2016-04-25  4:40             ` Ian Kent
2016-04-25 15:06               ` Christof Koehler
2016-04-26  1:06                 ` Ian Kent
2016-04-26  9:53                   ` Ian Kent
2016-04-26 15:27                     ` Christof Koehler
2016-04-27  1:54                       ` Ian Kent
2016-04-27  2:27                         ` Ian Kent
2016-04-27 16:52                         ` Christof Koehler
2016-04-28  2:56                           ` Ian Kent
2016-04-28  3:21                             ` Ian Kent
2016-04-28  9:12                               ` Christof Koehler
2016-04-28  9:10                             ` Christof Koehler
2016-04-28 10:50                               ` Ian Kent
2016-04-28 11:26                                 ` Christof Koehler
2016-04-28 12:40                                   ` Christof Koehler
2016-04-29  1:54                                   ` Ian Kent
2016-04-29 14:10                                     ` Christof Koehler
2016-04-29 14:42                                       ` Christof Koehler
2016-04-30  3:21                                       ` Ian Kent
2016-04-30 11:36                                         ` Christof Koehler
2016-04-30 15:15                                           ` Christof Koehler
2016-04-30 15:16                                           ` Christof Koehler
2016-05-02  6:01                                           ` Ian Kent
2016-05-02 16:08                                             ` Christof Koehler
2016-05-03  7:58                                               ` Ian Kent
2016-05-03 15:13                                                 ` Christof Koehler
2016-05-04  7:20                                                   ` Ian Kent
2016-05-04 12:38                                                     ` Christof Koehler
2016-04-09  1:35       ` Ian Kent
2016-04-11  2:42     ` Ian Kent
2016-04-11 16:32       ` Christof Koehler
2016-04-11 16:35         ` Christof Koehler
2016-04-12  1:07           ` Ian Kent
2016-04-08 11:47   ` Christof Koehler
