* random crashes, kdump and so on
@ 2019-03-25 11:48 Reindl Harald
  2019-03-25 19:07 ` Cong Wang
  0 siblings, 1 reply; 8+ messages in thread
From: Reindl Harald @ 2019-03-25 11:48 UTC (permalink / raw)
  To: Linux Kernel Network Developers

Besides that, I'm getting tired of the random crashes over the last
months (yes, the connlimit crashes have been fixed in the meantime, but
something is still broken) which are almost certainly in the
netdev/netfilter area, and "kernel.panic = 1" is not a lasting solution.

What in the world makes kdump on a VM with 2.5 GB RAM dump out 5.4 GB,
and why does it take a handful of reboots to get rid of "Can't find
kernel text map area from kcore" when trying to start the kdump service?

Why can't the kernel just write out what it normally prints on the
screen to a fixed device like /dev/sdc, without that whole dance? No
filesystem needed, just write it out raw and reboot.

sdc is a stable device name on a VM, and the terminal output has cut
off every relevant piece of information by the time you wait for the
hypervisor's HA to take a screenshot before the hard reset, instead of
the automatic reboot from the guest.

Can we please get Linux as stable as it used to be, or at least easier
to debug in production, so that one can submit useful information in
bug reports?

[root@localhost:/var/crash/127.0.0.1-2019-03-25-10:34:04]$  ls
total 5.4G
drwxr-xr-x 2 root root 4.0K 2019-03-25 10:35 .
drwxr-xr-x 3 root root 4.0K 2019-03-25 10:34 ..
-rw------- 1 root root    0 2019-03-25 10:35 vmcore-incomplete
-rw-r--r-- 1 root root 5.4G 2019-03-25 10:35 vmcore-dmesg-incomplete.txt

[root@localhost:/var/crash/127.0.0.1-2019-03-25-10:34:04]$  df
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/sdb1      ext4  5.8G  5.8G     0 100% /
/dev/sda1      ext4  485M   51M  431M  11% /boot

This still seems to be an issue:
http://lkml.iu.edu/hypermail/linux/kernel/1310.2/01470.html

[root@localhost:~]$  systemctl status kdump
● kdump.service - Crash recovery kernel arming
   Loaded: loaded (/etc/systemd/system/kdump.service; disabled; vendor
preset: disabled)
   Active: failed (Result: exit-code) since Mon 2019-03-25 12:33:07 CET;
5s ago
  Process: 25021 ExecStart=/usr/bin/kdumpctl start (code=exited,
status=1/FAILURE)
 Main PID: 25021 (code=exited, status=1/FAILURE)

Mar 25 12:33:05 localhost dracut[26225]: No dracut internal kernel
commandline stored in the initramfs
Mar 25 12:33:05 localhost dracut[26225]: *** Creating image file
'/boot/initramfs-4.20.17-100.fc28.x86_64kdump.img' ***
Mar 25 12:33:07 localhost dracut[26225]: *** Creating initramfs image
file '/boot/initramfs-4.20.17-100.fc28.x86_64kdump.img' done ***
Mar 25 12:33:07 localhost kdumpctl[25021]: Can't find kernel text map
area from kcore
Mar 25 12:33:07 localhost kdumpctl[25021]: Cannot load
/boot/vmlinuz-4.20.17-100.fc28.x86_64
Mar 25 12:33:07 localhost kdumpctl[25021]: kexec: failed to load kdump
kernel
Mar 25 12:33:07 localhost kdumpctl[25021]: Starting kdump: [FAILED]
Mar 25 12:33:07 localhost systemd[1]: kdump.service: Main process
exited, code=exited, status=1/FAILURE
Mar 25 12:33:07 localhost systemd[1]: kdump.service: Failed with result
'exit-code'.
Mar 25 12:33:07 localhost systemd[1]: Failed to start Crash recovery
kernel arming.


* Re: random crashes, kdump and so on
  2019-03-25 11:48 random crashes, kdump and so on Reindl Harald
@ 2019-03-25 19:07 ` Cong Wang
  2019-03-25 21:37   ` Reindl Harald
  0 siblings, 1 reply; 8+ messages in thread
From: Cong Wang @ 2019-03-25 19:07 UTC (permalink / raw)
  To: Reindl Harald; +Cc: Linux Kernel Network Developers

On Mon, Mar 25, 2019 at 5:08 AM Reindl Harald <h.reindl@thelounge.net> wrote:
>
> Besides that, I'm getting tired of the random crashes over the last
> months (yes, the connlimit crashes have been fixed in the meantime, but
> something is still broken) which are almost certainly in the
> netdev/netfilter area, and "kernel.panic = 1" is not a lasting solution.
>
> What in the world makes kdump on a VM with 2.5 GB RAM dump out 5.4 GB,
> and why does it take a handful of reboots to get rid of "Can't find
> kernel text map area from kcore" when trying to start the kdump service?

Possibly because of KASLR; please report this to the kexec-tools mailing
list. This looks more like a kexec-tools bug than a kernel bug.
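
If KASLR really is the trigger (only a guess at this point), one cheap
way to test it is to boot the first kernel once with KASLR disabled and
see whether kdumpctl then loads the crash kernel reliably; a rough
sketch for a Fedora-style setup using grubby:

  # disable KASLR in the regular (first) kernel for one test cycle
  grubby --update-kernel=ALL --args="nokaslr"
  reboot
  # after the reboot, re-arm kdump and check for the kcore error
  kdumpctl restart
  # undo the test parameter afterwards
  grubby --update-kernel=ALL --remove-args="nokaslr"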


>
> Why can't the kernel just write out what it normally prints on the
> screen to a fixed device like /dev/sdc, without that whole dance? No
> filesystem needed, just write it out raw and reboot.

It can, but many times stack traces are not sufficient for debugging
a kernel crash. This is why kdump saves the whole memory.
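
For the "no filesystem, just write it out raw" part specifically, the
kdump.conf shipped with kexec-tools does have a raw target that writes
the dump straight to an unformatted block device; a minimal, untested
sketch (the device name is a placeholder, and the stock kdump.conf
comments describe the collector options a raw target needs):

  # /etc/kdump.conf (sketch, not a drop-in config)
  # write the crash dump directly to this partition, bypassing any
  # filesystem; /dev/sdc1 stands in for a dedicated dump partition
  raw /dev/sdc1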


>
> sdc is a stable device name on a VM, and the terminal output has cut
> off every relevant piece of information by the time you wait for the
> hypervisor's HA to take a screenshot before the hard reset, instead of
> the automatic reboot from the guest.
>
> Can we please get Linux as stable as it used to be, or at least easier
> to debug in production, so that one can submit useful information in
> bug reports?


Switch to a stable distro, like CentOS or Debian stable. If you use
Fedora 28, it is expected to be less stable (relatively speaking).

Thanks.


* Re: random crashes, kdump and so on
  2019-03-25 19:07 ` Cong Wang
@ 2019-03-25 21:37   ` Reindl Harald
  2019-03-25 21:58     ` Cong Wang
  0 siblings, 1 reply; 8+ messages in thread
From: Reindl Harald @ 2019-03-25 21:37 UTC (permalink / raw)
  To: Cong Wang; +Cc: Linux Kernel Network Developers



On 25.03.19 at 20:07, Cong Wang wrote:
> On Mon, Mar 25, 2019 at 5:08 AM Reindl Harald <h.reindl@thelounge.net> wrote:
>>
>> Besides that, I'm getting tired of the random crashes over the last
>> months (yes, the connlimit crashes have been fixed in the meantime, but
>> something is still broken) which are almost certainly in the
>> netdev/netfilter area, and "kernel.panic = 1" is not a lasting solution.
>>
>> What in the world makes kdump on a VM with 2.5 GB RAM dump out 5.4 GB,
>> and why does it take a handful of reboots to get rid of "Can't find
>> kernel text map area from kcore" when trying to start the kdump service?
>
> Possibly because of KASLR; please report this to the kexec-tools mailing
> list. This looks more like a kexec-tools bug than a kernel bug.

As you can see in my post, I linked a similar discussion from years ago
pointing that out.

>> Why can't the kernel just write out what it normally prints on the
>> screen to a fixed device like /dev/sdc, without that whole dance? No
>> filesystem needed, just write it out raw and reboot.
> 
> It can, but many times stack traces are not sufficient for debugging
> a kernel crash. This is why kdump saves the whole memory.

and *how* can it without kdump?

The fact is that there is no sane reason for a machine with 2.5 GB RAM
to dump out 5.4 GB until the rootfs is full.

Frankly, it would even be helpful to *reverse* the stack trace on the VT
so that one can see the entry point instead of "not syncing, exception
in interrupt", given that the VT on most virtual machines is way too
small, and no, you don't want graphics drivers and whatnot on virtual
servers.

>> sdc is a stable device name on a VM, and the terminal output has cut
>> off every relevant piece of information by the time you wait for the
>> hypervisor's HA to take a screenshot before the hard reset, instead of
>> the automatic reboot from the guest.
>>
>> Can we please get Linux as stable as it used to be, or at least easier
>> to debug in production, so that one can submit useful information in
>> bug reports?
>
>
> Switch to a stable distro, like CentOS or Debian stable. If you use
> Fedora 28, it is expected to be less stable (relatively speaking).

Sorry, but that is nonsense. Don't tell me to "switch to a stable
distro" after more than 10 years of Fedora in production, and especially
don't tell me on kernel.org to "use some outdated crap full of
backports", especially on a setup doing nothing but iptables.

The fact is that around 4.19.x the kernel had a ton of issues, starting
with conncount being broken for months (again: with a simple way to get
the stack trace it would have been discovered easily), the scheduler
issue in 4.19.x eating people's data, and so on.


* Re: random crashes, kdump and so on
  2019-03-25 21:37   ` Reindl Harald
@ 2019-03-25 21:58     ` Cong Wang
  2019-03-25 22:10       ` Reindl Harald
  0 siblings, 1 reply; 8+ messages in thread
From: Cong Wang @ 2019-03-25 21:58 UTC (permalink / raw)
  To: Reindl Harald; +Cc: Linux Kernel Network Developers

On Mon, Mar 25, 2019 at 2:37 PM Reindl Harald <h.reindl@thelounge.net> wrote:
>
>
>
> On 25.03.19 at 20:07, Cong Wang wrote:
> > On Mon, Mar 25, 2019 at 5:08 AM Reindl Harald <h.reindl@thelounge.net> wrote:
> >>
> >> Besides that, I'm getting tired of the random crashes over the last
> >> months (yes, the connlimit crashes have been fixed in the meantime, but
> >> something is still broken) which are almost certainly in the
> >> netdev/netfilter area, and "kernel.panic = 1" is not a lasting solution.
> >>
> >> What in the world makes kdump on a VM with 2.5 GB RAM dump out 5.4 GB,
> >> and why does it take a handful of reboots to get rid of "Can't find
> >> kernel text map area from kcore" when trying to start the kdump service?
> >
> > Possibly because of KASLR; please report this to the kexec-tools mailing
> > list. This looks more like a kexec-tools bug than a kernel bug.
>
> As you can see in my post, I linked a similar discussion from years ago
> pointing that out.


Not surprised; we saw and fixed a similar issue with our kexec-tools.
It is very possible the same issue resurfaced again because of either
a newer kernel or a newer kexec-tools.


> >> Why can't the kernel just write out what it normally prints on the
> >> screen to a fixed device like /dev/sdc, without that whole dance? No
> >> filesystem needed, just write it out raw and reboot.
> >
> > It can, but many times stack traces are not sufficient for debugging
> > a kernel crash. This is why kdump saves the whole memory.
>
> and *how* can it without kdump?


For instance, netconsole.
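
Something like this on the kernel command line; a sketch following
Documentation/networking/netconsole.txt, with the addresses, interface
name and receiver MAC as placeholders:

  # sender (the crashing guest); as a boot parameter this needs netconsole
  # built in -- as a module the same string goes to modprobe instead:
  #   netconsole=<src-port>@<src-ip>/<dev>,<dst-port>@<dst-ip>/<dst-mac>
  netconsole=6665@10.0.0.2/eth0,6666@10.0.0.1/00:11:22:33:44:55

  # receiver (another machine on the same segment), any UDP listener works:
  nc -u -l 6666 | tee netconsole.log   # some netcat variants need -l -p 6666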


>
> The fact is that there is no sane reason for a machine with 2.5 GB RAM
> to dump out 5.4 GB until the rootfs is full.


You could choose to save dmesg only, if this is what you prefer. Unless
your kernel log is flooded, you won't need much disk space if you only
save dmesg. (The kernel log can be flooded, for example, when you have
a bad disk.)
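
Concretely, both variants are makedumpfile modes; a sketch, run by hand
against /proc/vmcore in the capture environment, with output paths as
placeholders (how to wire this into kdump.conf's core_collector is
distro-specific, so check the comments in the stock file):

  # extract only the kernel log from the crash image -- typically a few
  # hundred KB instead of gigabytes
  makedumpfile --dump-dmesg /proc/vmcore /var/crash/dmesg.txt

  # or a filtered, compressed full dump: -d 31 drops zero, cache, user
  # and free pages, -l compresses with LZO, so the result stays well
  # below the guest's 2.5 GB of RAM
  makedumpfile -l -d 31 --message-level 1 /proc/vmcore /var/crash/vmcore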


>
> Frankly, it would even be helpful to *reverse* the stack trace on the VT
> so that one can see the entry point instead of "not syncing, exception
> in interrupt", given that the VT on most virtual machines is way too
> small, and no, you don't want graphics drivers and whatnot on virtual
> servers.

Try some console server or netconsole.
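
On a VMware guest a virtual serial port may be the more robust "console
server": the full panic text then lands in the host-side serial log even
when the guest's networking is already dead. A sketch, assuming the
virtual serial device is added on the hypervisor side and shows up as
ttyS0 in the guest:

  # mirror the console to the serial port (kept on the VGA console too)
  grubby --update-kernel=ALL --args="console=ttyS0,115200n8 console=tty0"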


>
> >> sdc is a stable device name on a VM, and the terminal output has cut
> >> off every relevant piece of information by the time you wait for the
> >> hypervisor's HA to take a screenshot before the hard reset, instead of
> >> the automatic reboot from the guest.
> >>
> >> Can we please get Linux as stable as it used to be, or at least easier
> >> to debug in production, so that one can submit useful information in
> >> bug reports?
> >
> >
> > Switch to a stable distro, like CentOS or Debian stable. If you use
> > Fedora 28, it is expected to be less stable (relatively speaking).
> Sorry, but that is nonsense. Don't tell me to "switch to a stable
> distro" after more than 10 years of Fedora in production, and especially
> don't tell me on kernel.org to "use some outdated crap full of
> backports", especially on a setup doing nothing but iptables.

Sure, good luck. I use Fedora too as my personal development
workstation, in case you think I am biased.


>
> The fact is that around 4.19.x the kernel had a ton of issues, starting
> with conncount being broken for months (again: with a simple way to get
> the stack trace it would have been discovered easily), the scheduler
> issue in 4.19.x eating people's data, and so on.

If kexec-tools doesn't work for you, try something else like netconsole
to save the stack traces. Again, depending on the type of crash, just a
stack trace may not even be enough to debug it. Of course, having a
stack trace is still much better than having nothing.

Thanks.


* Re: random crashes, kdump and so on
  2019-03-25 21:58     ` Cong Wang
@ 2019-03-25 22:10       ` Reindl Harald
  2019-04-09  2:22         ` Reindl Harald
  0 siblings, 1 reply; 8+ messages in thread
From: Reindl Harald @ 2019-03-25 22:10 UTC (permalink / raw)
  To: Cong Wang; +Cc: Linux Kernel Network Developers



On 25.03.19 at 22:58, Cong Wang wrote:
> On Mon, Mar 25, 2019 at 2:37 PM Reindl Harald <h.reindl@thelounge.net> wrote:
>>
>> On 25.03.19 at 20:07, Cong Wang wrote:
>>> On Mon, Mar 25, 2019 at 5:08 AM Reindl Harald <h.reindl@thelounge.net> wrote:
>>>>
>>>> Besides that, I'm getting tired of the random crashes over the last
>>>> months (yes, the connlimit crashes have been fixed in the meantime, but
>>>> something is still broken) which are almost certainly in the
>>>> netdev/netfilter area, and "kernel.panic = 1" is not a lasting solution.
>>>>
>>>> What in the world makes kdump on a VM with 2.5 GB RAM dump out 5.4 GB,
>>>> and why does it take a handful of reboots to get rid of "Can't find
>>>> kernel text map area from kcore" when trying to start the kdump service?
>>>
>>> Possibly because of KASLR; please report this to the kexec-tools mailing
>>> list. This looks more like a kexec-tools bug than a kernel bug.
>>
>> As you can see in my post, I linked a similar discussion from years ago
>> pointing that out.
>
>
> Not surprised; we saw and fixed a similar issue with our kexec-tools.
> It is very possible the same issue resurfaced again because of either
> a newer kernel or a newer kexec-tools.

sad...


>>>> Why can't the kernel just write out what it normally prints on the
>>>> screen to a fixed device like /dev/sdc, without that whole dance? No
>>>> filesystem needed, just write it out raw and reboot.
>>>
>>> It can, but many times stack traces are not sufficient for debugging
>>> a kernel crash. This is why kdump saves the whole memory.
>>
>> and *how* can it without kdump?
> 
> 
> For instance, netconsole.

with a kernel panic in the network layer?

>> The fact is that there is no sane reason for a machine with 2.5 GB RAM
>> to dump out 5.4 GB until the rootfs is full.
> 
> 
> You could choose to save dmesg only, if this is what you prefer. Unless
> your kernel log is flooded, you won't need much disk space if you only
> save dmesg. (The kernel log can be flooded, for example, when you have
> a bad disk.)

It would be so cool if, instead of "you could", people said how you
could; frankly, if it were obvious I would already have configured it
that way :-)

Bad disks are impossible on a VM hosted on a shared SAN, or at least,
once the SAN starts throwing errors, the default gateway of the network
is no longer all that important...

On the other hand, it looked like dmesg really was that large, but how
can it be when the VM has only 2.5 GB RAM? Before deleting the stuff to
avoid another crash caused by the full disk, I did a tail on that file
and saw iptables logs which are strictly rate-limited, but God knows
what the kernel does in a panic event...

-rw------- 1 root root    0 2019-03-25 10:35 vmcore-incomplete
-rw-r--r-- 1 root root 5.4G 2019-03-25 10:35 vmcore-dmesg-incomplete.txt

>> Frankly, it would even be helpful to *reverse* the stack trace on the VT
>> so that one can see the entry point instead of "not syncing, exception
>> in interrupt", given that the VT on most virtual machines is way too
>> small, and no, you don't want graphics drivers and whatnot on virtual
>> servers.
> 
> Try some console server or netconsole.

These are VMware guests, and the crash is in the network layer.

>>>> sdc is a stable device name on a VM, and the terminal output has cut
>>>> off every relevant piece of information by the time you wait for the
>>>> hypervisor's HA to take a screenshot before the hard reset, instead of
>>>> the automatic reboot from the guest.
>>>>
>>>> Can we please get Linux as stable as it used to be, or at least easier
>>>> to debug in production, so that one can submit useful information in
>>>> bug reports?
>>>
>>>
>>> Switch to a stable distro, like CentOS or Debian stable. If you use
>>> Fedora 28, it is expected to be less stable (relatively speaking).
>> Sorry, but that is nonsense. Don't tell me to "switch to a stable
>> distro" after more than 10 years of Fedora in production, and especially
>> don't tell me on kernel.org to "use some outdated crap full of
>> backports", especially on a setup doing nothing but iptables.
> 
> Sure, good luck. I use Fedora too as my personal development
> workstation, in case you think I am biased.

good

>> The fact is that around 4.19.x the kernel had a ton of issues, starting
>> with conncount being broken for months (again: with a simple way to get
>> the stack trace it would have been discovered easily), the scheduler
>> issue in 4.19.x eating people's data, and so on.
>
> If kexec-tools doesn't work for you, try something else like netconsole
> to save the stack traces. Again, depending on the type of crash, just a
> stack trace may not even be enough to debug it. Of course, having a
> stack trace is still much better than having nothing.

For now it looks like tonight's 5.0.4 F29 build works without the
random crashes; kdump also didn't refuse to start this time, and
/var/crash is now a dedicated 3 GB virtual disk.

Fingers crossed; after the last days this looks good at first sight,
but on the other hand there were days up to weeks with no panic, so God
knows.


* Re: random crashes, kdump and so on
  2019-03-25 22:10       ` Reindl Harald
@ 2019-04-09  2:22         ` Reindl Harald
  2019-04-09  3:41           ` Cong Wang
  0 siblings, 1 reply; 8+ messages in thread
From: Reindl Harald @ 2019-04-09  2:22 UTC (permalink / raw)
  To: Cong Wang; +Cc: Linux Kernel Network Developers



On 25.03.19 at 23:10, Reindl Harald wrote:
>>> The fact is that around 4.19.x the kernel had a ton of issues, starting
>>> with conncount being broken for months (again: with a simple way to get
>>> the stack trace it would have been discovered easily), the scheduler
>>> issue in 4.19.x eating people's data, and so on.
>>
>> If kexec-tools doesn't work for you, try something else like netconsole
>> to save the stack traces. Again, depending on the type of crash, just a
>> stack trace may not even be enough to debug it. Of course, having a
>> stack trace is still much better than having nothing.
>
> For now it looks like tonight's 5.0.4 F29 build works without the
> random crashes; kdump also didn't refuse to start this time, and
> /var/crash is now a dedicated 3 GB virtual disk.
>
> Fingers crossed; after the last days this looks good at first sight,
> but on the other hand there were days up to weeks with no panic, so God
> knows.

After two weeks and 27 million accepted connections, 5.0.4 crashed too.

"vmcore-dmesg" piped through "sort | uniq" is reduced to 399 lines
containing just rate-limited "-j LOG" iptables events and nothing else
repeatet 32487 times until the dedicated virtual disk was full

what a mess.....

-rw------- 1 harry verwaltung    0 2019-04-09 03:01 vmcore-incomplete
-rw-r----- 1 harry verwaltung  93K 2019-04-09 03:09 filtered.txt
-rw-r----- 1 harry verwaltung 2,9G 2019-04-09 03:01
vmcore-dmesg-incomplete.txt

cat vmcore-dmesg-incomplete.txt | grep "1248098\.543887" | wc -l
32487


* Re: random crashes, kdump and so on
  2019-04-09  2:22         ` Reindl Harald
@ 2019-04-09  3:41           ` Cong Wang
  2019-04-09  4:02             ` Reindl Harald
  0 siblings, 1 reply; 8+ messages in thread
From: Cong Wang @ 2019-04-09  3:41 UTC (permalink / raw)
  To: Reindl Harald; +Cc: Linux Kernel Network Developers

On Mon, Apr 8, 2019 at 7:22 PM Reindl Harald <h.reindl@thelounge.net> wrote:
>
>
>
> On 25.03.19 at 23:10, Reindl Harald wrote:
> >>> The fact is that around 4.19.x the kernel had a ton of issues, starting
> >>> with conncount being broken for months (again: with a simple way to get
> >>> the stack trace it would have been discovered easily), the scheduler
> >>> issue in 4.19.x eating people's data, and so on.
> >>
> >> If kexec-tools doesn't work for you, try something else like netconsole
> >> to save the stack traces. Again, depending on the type of crash, just a
> >> stack trace may not even be enough to debug it. Of course, having a
> >> stack trace is still much better than having nothing.
> >
> > For now it looks like tonight's 5.0.4 F29 build works without the
> > random crashes; kdump also didn't refuse to start this time, and
> > /var/crash is now a dedicated 3 GB virtual disk.
> >
> > Fingers crossed; after the last days this looks good at first sight,
> > but on the other hand there were days up to weeks with no panic, so God
> > knows.
>
> After two weeks and 27 million accepted connections, 5.0.4 crashed too.
>
> "vmcore-dmesg" piped through "sort | uniq" is reduced to 399 lines
> containing just rate-limited "-j LOG" iptables events and nothing else,
> repeated 32487 times until the dedicated virtual disk was full.
>
> what a mess.....
>
> -rw------- 1 harry verwaltung    0 2019-04-09 03:01 vmcore-incomplete
> -rw-r----- 1 harry verwaltung  93K 2019-04-09 03:09 filtered.txt
> -rw-r----- 1 harry verwaltung 2,9G 2019-04-09 03:01
> vmcore-dmesg-incomplete.txt
>
> cat vmcore-dmesg-incomplete.txt | grep "1248098\.543887" | wc -l
> 32487

Not surprised; we saw TB-sized vmcore dmesg files in our data center
due to a flood of disk errors.

I didn't look into it, but it looks like a bug somewhere. Even with the
default printk buffer size, the dmesg should not be so huge. A blind
guess would be that something is wrong in the /proc/vmcore notes.
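
For scale (a back-of-the-envelope, not something measured here): the
printk ring buffer is a fixed, small allocation, so a multi-gigabyte
vmcore-dmesg can only come from the extractor walking it incorrectly.
The configured size is easy to check on the running kernel:

  # compile-time default: buffer size is 2^CONFIG_LOG_BUF_SHIFT bytes
  grep CONFIG_LOG_BUF_SHIFT /boot/config-$(uname -r)
  # a larger buffer can only be requested at boot (e.g. log_buf_len=4M),
  # so the current command line tells the rest
  cat /proc/cmdline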

Did your kernel crash happen before or after the flooded iptables
log? The kernel is supposed to jump to the crash kernel immediately
after a crash, so if it doesn't, it could be a kernel kexec bug.

Thanks.


* Re: random crashes, kdump and so on
  2019-04-09  3:41           ` Cong Wang
@ 2019-04-09  4:02             ` Reindl Harald
  0 siblings, 0 replies; 8+ messages in thread
From: Reindl Harald @ 2019-04-09  4:02 UTC (permalink / raw)
  To: Cong Wang; +Cc: Linux Kernel Network Developers



On 09.04.19 at 05:41, Cong Wang wrote:
> On Mon, Apr 8, 2019 at 7:22 PM Reindl Harald <h.reindl@thelounge.net> wrote:
>> After two weeks and 27 million accepted connections, 5.0.4 crashed too.
>>
>> "vmcore-dmesg" piped through "sort | uniq" is reduced to 399 lines
>> containing just rate-limited "-j LOG" iptables events and nothing else,
>> repeated 32487 times until the dedicated virtual disk was full.
>>
>> what a mess.....
>>
>> -rw------- 1 harry verwaltung    0 2019-04-09 03:01 vmcore-incomplete
>> -rw-r----- 1 harry verwaltung  93K 2019-04-09 03:09 filtered.txt
>> -rw-r----- 1 harry verwaltung 2,9G 2019-04-09 03:01
>> vmcore-dmesg-incomplete.txt
>>
>> cat vmcore-dmesg-incomplete.txt | grep "1248098\.543887" | wc -l
>> 32487
> 
> Not surprised; we saw TB-sized vmcore dmesg files in our data center
> due to a flood of disk errors.
>
> I didn't look into it, but it looks like a bug somewhere. Even with the
> default printk buffer size, the dmesg should not be so huge. A blind
> guess would be that something is wrong in the /proc/vmcore notes.
>
> Did your kernel crash happen before or after the flooded iptables
> log? The kernel is supposed to jump to the crash kernel immediately
> after a crash, so if it doesn't, it could be a kernel kexec bug.

The problem is that I have no idea what is happening, why it is
happening, or where it is happening, and kexec was supposed to tell me
at least something about it :-(

Given that the virtual machine has only 2.5 GB RAM and that the same
399 lines of iptables log appear 32487 times, I guess kexec goes crazy,
because it's impossible to have a 2.9 GB dmesg.
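
The numbers are at least consistent with a loop rather than with a log
that was genuinely that big; a rough back-of-the-envelope, assuming an
average iptables LOG line of about 240 bytes (an estimate, not
measured):

  echo $((399 * 32487))        # ~13 million lines
  echo $((399 * 32487 * 240))  # ~3.1e9 bytes, i.e. roughly the 2.9G seen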

Something is looping here, and the end of the story is: when the disk
where /var/crash is mounted is full, it stops and reboots into the
normal kernel. Frankly, I wouldn't have a problem with the loop and the
full disk if that damned crap would just leave something useful before
the loop :-(

Now running 5.0.7; maybe it gets better over time. Before I'd had
enough and set up kexec with 4.20.17, there were multiple reboots on
that one day, but the whole thing is fishy: it started months ago with
4.18.x, after 3 weeks without any issue, every Saturday; 4.19.x at that
time was completely broken with the conncount bug, and with fingers
crossed the last 4.18.x EOL kernel stayed up for 2 full months.

Sadly it was a brand-new setup at that time, so I have no idea when the
root cause was introduced and can't point out "guys, after kernel xyz
iptables/networking got fishy", and the fact that it takes hours, days
or even weeks to crash doesn't help anyway. I really thought "hey,
whatever it was, it seems to be gone with 5.x".

