selinux.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* semodule -i and load_policy coredumps on version 3.0 - not latest GIT
@ 2020-04-14  0:29 Russell Coker
  2020-04-14 17:27 ` Nicolas Iooss
  0 siblings, 1 reply; 5+ messages in thread
From: Russell Coker @ 2020-04-14  0:29 UTC (permalink / raw)
  To: selinux, bigon

I'm getting core dumps when inserting modules.  I can repeatedly run semodule 
with the same module and have it crash sometimes and not others, but it 
crashes more often if I have 2 slightly different modules of the same name and 
alternate between inserting them.

while semodule -i pol/toadd.pp && sleep 8 && semodule -i pol2/toadd.pp && 
sleep 8 ; do date ; done

The above shell command is pretty good at causing SEGVs.
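The loop's behaviour can be sketched with hypothetical stand-in commands (`run_a`/`run_b` are placeholders, not real tools): because the commands are chained with `&&` in the loop condition, the body runs only while every command succeeds, so the first SEGV (a nonzero exit from semodule) ends the loop.

```shell
#!/bin/sh
# Hypothetical stand-ins for the two semodule invocations: run_a starts
# failing on its 3rd call, mimicking an intermittent crash (nonzero exit).
i=0
run_a() { i=$((i+1)); [ "$i" -lt 3 ]; }
run_b() { true; }

# Same shape as the reproducer loop: the body runs only while both
# commands in the condition succeed; the first failure ends the loop.
count=0
while run_a && run_b ; do
    count=$((count+1))
done
echo "iterations before failure: $count"   # prints: iterations before failure: 2
```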

This happens regularly with libsepol version 3.0 (which is in Debian/
Unstable); so far I have not reproduced it with the latest git version of 
libsepol.  While I'm not certain the bug is fixed in the latest git version, I 
think it's very likely to be fixed (I'll have to run tests for another couple 
of days to be convinced).  Have libsepol developers knowingly fixed such a bug?

Here's coredumpctl output from semodule (at the time libsepol wasn't compiled 
with debugging symbols):

Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `/usr/sbin/semodule -i toadd.pp'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  __strlen_sse2 () at ../sysdeps/x86_64/multiarch/../strlen.S:120
120     ../sysdeps/x86_64/multiarch/../strlen.S: No such file or directory.
(gdb) bt
#0  __strlen_sse2 () at ../sysdeps/x86_64/multiarch/../strlen.S:120
#1  0x00007ff2128cf756 in __vfprintf_internal (s=s@entry=0x7ffecc31daa0, 
    format=format@entry=0x7ff212af88f9 "Error: Unknown keyword %s\n", 
    ap=ap@entry=0x7ffecc31de40, mode_flags=mode_flags@entry=2)
    at vfprintf-internal.c:1688
#2  0x00007ff2128e11f6 in __vsnprintf_internal (
    string=0x7ffecc31dc20 "Error: Unknown keyword ", maxlen=<optimized out>, 
    format=0x7ff212af88f9 "Error: Unknown keyword %s\n", args=0x7ffecc31de40, 
    mode_flags=2) at vsnprintf.c:114

Here's one from load_policy which I believe is related.  Running semodule -i 
repeatedly on the same file doesn't seem to cause a problem (I've had a loop of 
that run for hours without a SEGV), but it happened quickly when alternately 
loading 2 slightly different files.

  Command Line: /sbin/load_policy
    Executable: /usr/sbin/load_policy
       Boot ID: 8727799a8e0b44f1885f1b4c681efea9
    Machine ID: 384a085cdf4a437cae153168e34245f4
      Hostname: play
       Storage: /var/lib/systemd/coredump/core.load_policy.
0.8727799a8e0b44f188>
       Message: Process 70655 (load_policy) of user 0 dumped core.
                
                Stack trace of thread 70655:
                #0  0x00007f0716a6685d ebitmap_destroy (libsepol.so.1 + 
0x1185d)
                #1  0x00007f0716a635eb constraint_expr_destroy (libsepol.so.1 
+>
                #2  0x00007f0716aa7d71 class_destroy (libsepol.so.1 + 0x52d71)
                #3  0x00007f0716a73893 hashtab_map (libsepol.so.1 + 0x1e893)
                #4  0x00007f0716aa86b6 symtabs_destroy (libsepol.so.1 + 
0x536b6)
                #5  0x00007f0716aa822b policydb_destroy (libsepol.so.1 + 
0x5322>
                #6  0x00007f0716ab091a policydb_to_image (libsepol.so.1 + 
0x5b9>
                #7  0x00007f0716ab0e08 sepol_policydb_to_image (libsepol.so.1 
+>
                #8  0x00007f0716a3eadc selinux_mkload_policy (libselinux.so.1 
+>
                #9  0x00005560e76d12bf n/a (load_policy + 0x12bf)
                #10 0x00007f071688de0b __libc_start_main (libc.so.6 + 0x26e0b)
                #11 0x00005560e76d134a n/a (load_policy + 0x134a)

Here's one from semodule -i:

  Command Line: semodule -i pol2/toadd.pp
    Executable: /usr/sbin/semodule
       Boot ID: 8727799a8e0b44f1885f1b4c681efea9
    Machine ID: 384a085cdf4a437cae153168e34245f4
      Hostname: play
       Storage: /var/lib/systemd/coredump/core.semodule.
0.8727799a8e0b44f1885f1>
       Message: Process 92165 (semodule) of user 0 dumped core.
                
                Stack trace of thread 92165:
                #0  0x00007ff72cde6d9d __cil_build_ast_node_helper 
(libsepol.so>
                #1  0x00007ff72ce08721 cil_tree_walk_core (libsepol.so.1 + 
0xaf>
                #2  0x00007ff72ce08884 cil_tree_walk (libsepol.so.1 + 0xaf884)
                #3  0x00007ff72ce08793 cil_tree_walk_core (libsepol.so.1 + 
0xaf>
                #4  0x00007ff72ce08884 cil_tree_walk (libsepol.so.1 + 0xaf884)
                #5  0x00007ff72cde8bdf cil_build_ast (libsepol.so.1 + 0x8fbdf)
                #6  0x00007ff72cdc9c25 cil_compile_nopdb (libsepol.so.1 + 
0x70c>
                #7  0x00007ff72cd2d9b9 n/a (libsemanage.so.1 + 0x169b9)
                #8  0x00007ff72cd32e2e semanage_commit (libsemanage.so.1 + 
0x1b>
                #9  0x000055caa80921f4 n/a (semodule + 0x31f4)


  Command Line: semodule -i pol/toadd.pp
    Executable: /usr/sbin/semodule
       Boot ID: 8727799a8e0b44f1885f1b4c681efea9
    Machine ID: 384a085cdf4a437cae153168e34245f4
      Hostname: play
       Storage: /var/lib/systemd/coredump/core.semodule.
0.8727799a8e0b44f1885f1b4c681efea9.97315.1586589967000000000000.lz4
       Message: Process 97315 (semodule) of user 0 dumped core.
                
                Stack trace of thread 97315:
                #0  0x00007fb79a99897e cil_list_destroy (libsepol.so.1 + 
0x9797e)
                #1  0x00007fb79a9a5339 cil_reset_classperms (libsepol.so.1 + 
0xa4339)
                #2  0x00007fb79a9a53c1 cil_reset_classperms_list (libsepol.so.
1 + 0xa43c1)
                #3  0x00007fb79a9a574d cil_reset_avrule (libsepol.so.1 + 
0xa474d)
                #4  0x00007fb79a9a5ed5 __cil_reset_node (libsepol.so.1 + 
0xa4ed5)
                #5  0x00007fb79a9b0721 cil_tree_walk_core (libsepol.so.1 + 
0xaf721)
                #6  0x00007fb79a9b0884 cil_tree_walk (libsepol.so.1 + 0xaf884)
                #7  0x00007fb79a9b0793 cil_tree_walk_core (libsepol.so.1 + 
0xaf793)
                #8  0x00007fb79a9b0884 cil_tree_walk (libsepol.so.1 + 0xaf884)
                #9  0x00007fb79a9b0793 cil_tree_walk_core (libsepol.so.1 + 
0xaf793)
                #10 0x00007fb79a9b0884 cil_tree_walk (libsepol.so.1 + 0xaf884)
                #11 0x00007fb79a9b0793 cil_tree_walk_core (libsepol.so.1 + 
0xaf793)
                #12 0x00007fb79a9b0884 cil_tree_walk (libsepol.so.1 + 0xaf884)
                #13 0x00007fb79a9b0793 cil_tree_walk_core (libsepol.so.1 + 
0xaf793)
                #14 0x00007fb79a9b0884 cil_tree_walk (libsepol.so.1 + 0xaf884)
                #15 0x00007fb79a9b0793 cil_tree_walk_core (libsepol.so.1 + 
0xaf793)
                #16 0x00007fb79a9b0884 cil_tree_walk (libsepol.so.1 + 0xaf884)
                #17 0x00007fb79a9b0793 cil_tree_walk_core (libsepol.so.1 + 
0xaf793)
                #18 0x00007fb79a9b0884 cil_tree_walk (libsepol.so.1 + 0xaf884)
                #19 0x00007fb79a9b0793 cil_tree_walk_core (libsepol.so.1 + 
0xaf793)
                #20 0x00007fb79a9b0884 cil_tree_walk (libsepol.so.1 + 0xaf884)
                #21 0x00007fb79a9a6137 cil_reset_ast (libsepol.so.1 + 0xa5137)
                #22 0x00007fb79a9ae84f cil_resolve_ast (libsepol.so.1 + 
0xad84f)
                #23 0x00007fb79a971c9b cil_compile_nopdb (libsepol.so.1 + 
0x70c9b)
                #24 0x00007fb79a8d59b9 n/a (libsemanage.so.1 + 0x169b9)
                #25 0x00007fb79a8dae2e semanage_commit (libsemanage.so.1 + 
0x1be2e)
                #26 0x0000565277c421f4 n/a (semodule + 0x31f4)
                #27 0x00007fb79a722e0b __libc_start_main (libc.so.6 + 0x26e0b)
                #28 0x0000565277c4271a n/a (semodule + 0x371a)


  Command Line: semodule -i pol/toadd.pp
    Executable: /usr/sbin/semodule
       Boot ID: 587ecc120f8d44d38475a9fa1e067f66
    Machine ID: 384a085cdf4a437cae153168e34245f4
      Hostname: play
       Storage: /var/lib/systemd/coredump/core.semodule.
0.587ecc120f8d44d38475a>
       Message: Process 17359 (semodule) of user 0 dumped core.
                
                Stack trace of thread 17359:
                #0  0x00007f4aa083c956 cil_list_destroy (libsepol.so.1 + 
0x9795>
                #1  0x00007f4aa0827e4e cil_destroy_classperms (libsepol.so.1 + 
>
                #2  0x00007f4aa0828082 cil_destroy_classperms_list 
(libsepol.so>
                #3  0x00007f4aa082a923 cil_destroy_avrule (libsepol.so.1 + 
0x85>
                #4  0x00007f4aa0816438 cil_destroy_data (libsepol.so.1 + 
0x7143>
                #5  0x00007f4aa08546a4 cil_tree_node_destroy (libsepol.so.1 + 
0>
                #6  0x00007f4aa085455c cil_tree_children_destroy (libsepol.so.
1>
                #7  0x00007f4aa0854491 cil_tree_subtree_destroy (libsepol.so.1 
>
                #8  0x00007f4aa085445a cil_tree_destroy (libsepol.so.1 + 
0xaf45>
                #9  0x00007f4aa0815842 cil_db_destroy (libsepol.so.1 + 
0x70842)
                #10 0x00007f4aa0779b47 n/a (libsemanage.so.1 + 0x16b47)
                #11 0x00007f4aa077ee2e semanage_commit (libsemanage.so.1 + 
0x1b>
                #12 0x0000561cbbe3c1f4 n/a (semodule + 0x31f4)
                #13 0x00007f4aa05c6e0b __libc_start_main (libc.so.6 + 0x26e0b)
                #14 0x0000561cbbe3c71a n/a (semodule + 0x371a)

Here's one of the smaller entries in my collection of valgrind outputs from 
semodule having problems.  They all appear to involve uninitialised memory.

Memcheck, a memory error detector
Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info
Command: semodule -i pol/toadd.pp

Conditional jump or move depends on uninitialised value(s)
   at 0x48FA768: cil_tree_walk_core (cil_tree.c:283)
   by 0x48FA883: cil_tree_walk (cil_tree.c:316)
   by 0x48FA792: cil_tree_walk_core (cil_tree.c:284)
   by 0x48FA883: cil_tree_walk (cil_tree.c:316)
   by 0x48F8510: cil_resolve_ast (cil_resolve_ast.c:3928)
   by 0x48BBC9A: cil_compile@@LIBSEPOL_1.1 (cil.c:571)
   by 0x49489B8: ??? (in /usr/lib/x86_64-linux-gnu/libsemanage.so.1)
   by 0x494DE2D: semanage_commit (in /usr/lib/x86_64-linux-gnu/libsemanage.so.
1)
   by 0x10B1F3: ??? (in /usr/sbin/semodule)
   by 0x499AE0A: (below main) (libc-start.c:308)

Conditional jump or move depends on uninitialised value(s)
   at 0x48FA4F3: cil_tree_children_destroy (cil_tree.c:190)
   by 0x48FA490: cil_tree_subtree_destroy (cil_tree.c:172)
   by 0x48FA459: cil_tree_destroy (cil_tree.c:165)
   by 0x48BB841: cil_db_destroy (cil.c:470)
   by 0x4948B46: ??? (in /usr/lib/x86_64-linux-gnu/libsemanage.so.1)
   by 0x494DE2D: semanage_commit (in /usr/lib/x86_64-linux-gnu/libsemanage.so.
1)
   by 0x10B1F3: ??? (in /usr/sbin/semodule)
   by 0x499AE0A: (below main) (libc-start.c:308)


HEAP SUMMARY:
    in use at exit: 5,595 bytes in 114 blocks
  total heap usage: 14,200,981 allocs, 14,200,867 frees, 2,058,178,500 bytes 
allocated

LEAK SUMMARY:
   definitely lost: 0 bytes in 0 blocks
   indirectly lost: 0 bytes in 0 blocks
     possibly lost: 0 bytes in 0 blocks
   still reachable: 5,595 bytes in 114 blocks
        suppressed: 0 bytes in 0 blocks
Rerun with --leak-check=full to see details of leaked memory

For lists of detected and suppressed errors, rerun with: -s
ERROR SUMMARY: 54 errors from 2 contexts (suppressed: 0 from 0)


-- 
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/




^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: semodule -i and load_policy coredumps on version 3.0 - not latest GIT
  2020-04-14  0:29 semodule -i and load_policy coredumps on version 3.0 - not latest GIT Russell Coker
@ 2020-04-14 17:27 ` Nicolas Iooss
  2020-04-15 17:17   ` Russell Coker
  2020-04-21  4:01   ` Russell Coker
  0 siblings, 2 replies; 5+ messages in thread
From: Nicolas Iooss @ 2020-04-14 17:27 UTC (permalink / raw)
  To: Russell Coker; +Cc: SElinux list, Laurent Bigonville

On Tue, Apr 14, 2020 at 2:29 AM Russell Coker <russell@coker.com.au> wrote:
>
> I'm getting core dumps when inserting modules.  I can repeatedly run semodule
> with the same module and have it crash sometimes and not others, but it
> crashes more often if I have 2 slightly different modules of the same name and
> alternate between inserting them.
>
> while semodule -i pol/toadd.pp && sleep 8 && semodule -i pol2/toadd.pp &&
> sleep 8 ; do date ; done
>
> The above shell command is pretty good at causing SEGVs.
>
> This happens regularly with libsepol version 3.0 (which is in Debian/
> Unstable); so far I have not reproduced it with the latest git version of
> libsepol.  While I'm not certain the bug is fixed in the latest git version, I
> think it's very likely to be fixed (I'll have to run tests for another couple
> of days to be convinced).  Have libsepol developers knowingly fixed such a bug?
>
> Here's coredumpctl output from semodule (at the time libsepol wasn't compiled
> with debugging symbols):
>
[...]

Hello,
This looks like a pretty difficult issue. The facts that it is not easily
reproducible and that the stack trace changes even though the 2
modules you are testing do not are interesting. They imply that there
is some randomness involved. As far as I remember from the code I've read
so far, SELinux's userspace utilities written in C do not use random
numbers. So this non-reproducibility could be caused by something
else, such as the order in which files are listed in directories on your
filesystem (for example in /var/lib/selinux...) or ASLR (Address
Space Layout Randomization).

The first trace seems to hint at a buffer overflow. A failure in
ebitmap_destroy() when destroying a policydb object (with
policydb_destroy()) is likely to mean that the object was corrupted in
some way. This makes the hypothesis "you don't have reproducibility
because of ASLR" more likely if, for example, pointers get used and the
execution path changes depending on their raw values.
In order to test this hypothesis, could you run the while loop with
ASLR disabled? For example with "setarch $(uname -m) -R semodule -i
pol/toadd.pp"? Does it continue to fail randomly?
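Before disabling ASLR per-process, it may help to confirm the current system-wide setting. A minimal Linux-specific sketch (it assumes the standard /proc/sys/kernel/randomize_va_space sysctl; treat it as a rough check only):

```shell
# Rough check of the system-wide ASLR setting before testing with
# "setarch -R" (Linux: 0 = disabled, 1 = partial, 2 = full randomization).
if [ -r /proc/sys/kernel/randomize_va_space ]; then
    aslr="$(cat /proc/sys/kernel/randomize_va_space)"
else
    aslr="unknown"
fi
echo "randomize_va_space: $aslr"
```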

In order to test whether this bug is a buffer overflow, another thing
you could do would be to recompile semodule, libsepol and libsemanage
with AddressSanitizer (for example by cloning the git repository
at the 3.0 tag, running "make DESTDIR=$HOME/selinux-asan CC='gcc
-no-pie -fsanitize=address' install" and pointing your
LD_LIBRARY_PATH and PATH at the newly built files). This might show
where a buffer overflow occurs.

As for "Have libsepol developers knowingly fixed such a bug?": recent
commits have changed a few things in libsepol's internals, and I do not
know of any commit that would specifically fix the bug you are seeing.

Best,
Nicolas



* Re: semodule -i and load_policy coredumps on version 3.0 - not latest GIT
  2020-04-14 17:27 ` Nicolas Iooss
@ 2020-04-15 17:17   ` Russell Coker
  2020-04-21  4:01   ` Russell Coker
  1 sibling, 0 replies; 5+ messages in thread
From: Russell Coker @ 2020-04-15 17:17 UTC (permalink / raw)
  To: Nicolas Iooss; +Cc: SElinux list, Laurent Bigonville

On Wednesday, 15 April 2020 3:27:38 AM AEST Nicolas Iooss wrote:
> This looks like a pretty difficult issue. The facts that it is not easily
> reproducible and that the stack trace changes even though the 2
> modules you are testing do not are interesting. They imply that there
> is some randomness involved. As far as I remember from the code I've read

I'm still debugging this.  My first belief that the bug was fixed in the latest 
git seems incorrect.  I've now got a collection of valgrind logs from libsepol 
at git revision 5447c8490b318ef64c61eb6022baddca69233733 (the latest as of 
yesterday afternoon).  I presume that the valgrind logs of "Conditional jump 
or move depends on uninitialised value(s)" correspond to a SEGV when not 
running valgrind, but I haven't proven that yet.

I have not yet worked out how to reproduce the bug on another system.  
Nicolas, thanks for all your suggestions; I will go back to them once I have 
more test results.

-- 
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/





* Re: semodule -i and load_policy coredumps on version 3.0 - not latest GIT
  2020-04-14 17:27 ` Nicolas Iooss
  2020-04-15 17:17   ` Russell Coker
@ 2020-04-21  4:01   ` Russell Coker
  2020-04-22 10:49     ` Russell Coker
  1 sibling, 1 reply; 5+ messages in thread
From: Russell Coker @ 2020-04-21  4:01 UTC (permalink / raw)
  To: Nicolas Iooss; +Cc: SElinux list, Laurent Bigonville

On Wednesday, 15 April 2020 3:27:38 AM AEST Nicolas Iooss wrote:
> This looks like a pretty difficult issue. The facts that it is not easily
> reproducible and that the stack trace changes even though the 2
> modules you are testing do not are interesting. They imply that there

I have done further testing.

I could not reproduce it on another VM on the same hardware.

I could not reproduce it on the same VM after a reboot of the physical 
hardware (running Debian/Unstable with KVM).

After the reboot I could not reproduce it on saved snapshots of the VM in 
question dating back to when I had previously had problems.  I conclude that 
rebooting the hardware solved the problem.

The problem was either an issue of failing hardware (I am running memtest86+ 
right now) or hostile action.  When testing for issues with libsepol I got a 
couple of coredumps from valgrind; that isn't necessarily an indication of 
anything (valgrind is complex software, and it provides information on how to 
report bugs when it crashes, so crashes of valgrind aren't unexpected).  I also 
got one coredump from sshd, which is very unexpected: sshd is known to be high 
quality software that is well written and well audited.  This makes me wonder 
whether there is some commonality between sshd and semodule that caused both 
of them to have problems on the system in question.  For background, the 
sshd coredump info is below.

# coredumpctl info /usr/sbin/sshd
           PID: 42696 (sshd)
           UID: 0 (root)
           GID: 0 (root)
        Signal: 11 (SEGV)
     Timestamp: Tue 2020-04-14 19:48:42 UTC (6 days ago)
  Command Line: sshd: [accepted]
    Executable: /usr/sbin/sshd
       Boot ID: eec56f683e7b4aeb90a89845bd7920f8
    Machine ID: 384a085cdf4a437cae153168e34245f4
      Hostname: play
       Storage: /var/lib/systemd/coredump/core.sshd.
0.eec56f683e7b4aeb90a89845bd7920f8.42696.1586893722000000000000.lz4
       Message: Process 42696 (sshd) of user 0 dumped core.
                
                Stack trace of thread 42696:
                #0  0x00007f2dfe8da2e7 dl_new_hash (ld-linux-x86-64.so.2 + 
0xa2e7)
                #1  0x00007f2dfe8deaf3 _dl_fixup (ld-linux-x86-64.so.2 + 
0xeaf3)
                #2  0x00007f2dfe8e5383 _dl_runtime_resolve_fxsave (ld-linux-
x86-64.so.2 + 0x15383)
                #3  0x00007f2dfe1453e0 n/a (libcap-ng.so.0 + 0x23e0)
                #4  0x00007f2dfe1e9c78 __run_fork_handlers (libc.so.6 + 
0x84c78)
                #5  0x00007f2dfe22ffb8 __libc_fork (libc.so.6 + 0xcafb8)
                #6  0x000055ab9ae2bac9 n/a (sshd + 0xfac9)
                #7  0x00007f2dfe18be0b __libc_start_main (libc.so.6 + 0x26e0b)
                #8  0x000055ab9ae2bf7a n/a (sshd + 0xff7a)

I ran the spectre-meltdown-checker script; it says that the physical hardware 
in question is vulnerable to the following (there don't seem to be microcode 
updates for the Q9505 CPU to fix all the issues):

CVE-2018-3640 aka 'Variant 3a, rogue system register read'
* CPU microcode mitigates the vulnerability:  NO 
> STATUS:  VULNERABLE  (an up-to-date CPU microcode is needed to mitigate this 
vulnerability)

CVE-2018-3639 aka 'Variant 4, speculative store bypass'
* Mitigated according to the /sys interface:  NO  (Vulnerable)
* Kernel supports disabling speculative store bypass (SSB):  YES  (found in /
proc/self/status)
* SSB mitigation is enabled and active:  NO 
> STATUS:  VULNERABLE  (Your CPU doesn't support SSBD)

CVE-2018-12126 aka 'Fallout, microarchitectural store buffer data sampling 
(MSBDS)'
* Mitigated according to the /sys interface:  NO  (Vulnerable: Clear CPU 
buffers attempted, no microcode; SMT disabled)
* Kernel supports using MD_CLEAR mitigation:  YES  (found md_clear 
implementation evidence in kernel image)
* Kernel mitigation is enabled and active:  NO 
* SMT is either mitigated or disabled:  YES 
> STATUS:  VULNERABLE  (Your kernel supports mitigation, but your CPU 
microcode also needs to be updated to mitigate the vulnerability)

CVE-2018-12130 aka 'ZombieLoad, microarchitectural fill buffer data sampling 
(MFBDS)'
* Mitigated according to the /sys interface:  NO  (Vulnerable: Clear CPU 
buffers attempted, no microcode; SMT disabled)
* Kernel supports using MD_CLEAR mitigation:  YES  (found md_clear 
implementation evidence in kernel image)
* Kernel mitigation is enabled and active:  NO 
* SMT is either mitigated or disabled:  YES 
> STATUS:  VULNERABLE  (Your kernel supports mitigation, but your CPU 
microcode also needs to be updated to mitigate the vulnerability)

CVE-2018-12127 aka 'RIDL, microarchitectural load port data sampling (MLPDS)'
* Mitigated according to the /sys interface:  NO  (Vulnerable: Clear CPU 
buffers attempted, no microcode; SMT disabled)
* Kernel supports using MD_CLEAR mitigation:  YES  (found md_clear 
implementation evidence in kernel image)
* Kernel mitigation is enabled and active:  NO 
* SMT is either mitigated or disabled:  YES 
> STATUS:  VULNERABLE  (Your kernel supports mitigation, but your CPU 
microcode also needs to be updated to mitigate the vulnerability)

CVE-2019-11091 aka 'RIDL, microarchitectural data sampling uncacheable memory 
(MDSUM)'
* Mitigated according to the /sys interface:  NO  (Vulnerable: Clear CPU 
buffers attempted, no microcode; SMT disabled)
* Kernel supports using MD_CLEAR mitigation:  YES  (found md_clear 
implementation evidence in kernel image)
* Kernel mitigation is enabled and active:  NO 
* SMT is either mitigated or disabled:  YES 
> STATUS:  VULNERABLE  (Your kernel supports mitigation, but your CPU 
microcode also needs to be updated to mitigate the vulnerability)

Given that the system is vulnerable to certain known attacks and that sshd is 
a prime target for any such attack, I believe that the sshd SEGV is an 
indication that the root cause might have been hostile action.  I don't expect 
ever to have proof of the cause (unless memtest86+ flags an error).  When 
hostile activity goes away on a reboot, something memory resident is likely, 
in which case there's probably no record on disk.

I am convinced beyond all reasonable doubt that the SEGVs and valgrind 
warnings I saw from semodule were not evidence of a bug in libsepol.

-- 
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/





* Re: semodule -i and load_policy coredumps on version 3.0 - not latest GIT
  2020-04-21  4:01   ` Russell Coker
@ 2020-04-22 10:49     ` Russell Coker
  0 siblings, 0 replies; 5+ messages in thread
From: Russell Coker @ 2020-04-22 10:49 UTC (permalink / raw)
  To: Nicolas Iooss; +Cc: SElinux list, Laurent Bigonville

On Tuesday, 21 April 2020 2:01:45 PM AEST Russell Coker wrote:
> After the reboot I could not reproduce it on saved snapshots of the VM in
> question dating back to when I had previously had problems.  I conclude that
> rebooting the hardware solved the problem.
> 
> The problem was either an issue of failing hardware (I am running memtest86+
> right now) or hostile action.  When testing for issues with libsepol I got

Memtest86+ has proven that the system in question had a damaged motherboard: 
any time a DIMM socket other than socket 1 was in use, Memtest86+ would 
lock up solid in less than 7 seconds (for reference, a complete Memtest86+ run 
was successful at the time the system was deployed).  The system is on the 
e-waste pile and I don't expect to see such semodule problems again.

How the system in question managed to boot Linux and run multiple VMs while 
Memtest86+ crashed so soon remains a mystery to me.

-- 
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/





end of thread, other threads:[~2020-04-22 10:49 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-04-14  0:29 semodule -i and load_policy coredumps on version 3.0 - not latest GIT Russell Coker
2020-04-14 17:27 ` Nicolas Iooss
2020-04-15 17:17   ` Russell Coker
2020-04-21  4:01   ` Russell Coker
2020-04-22 10:49     ` Russell Coker
