From mboxrd@z Thu Jan 1 00:00:00 1970
From: Boris Derzhavets <bderzhavets@yahoo.com>
Subject: Re: Re: 2.6.37-rc1 mainline domU - BUG: unable to handle kernel paging request
Date: Mon, 15 Nov 2010 03:05:10 -0800 (PST)
Message-ID: <780511.30233.qm@web56106.mail.re3.yahoo.com>
In-Reply-To: <166134.36458.qm@web56104.mail.re3.yahoo.com>
References: <166134.36458.qm@web56104.mail.re3.yahoo.com>
To: Sander Eikelenboom <linux@eikelenboom.it>, Bruce Edge <bruce.edge@gmail.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>, xen-devel@lists.xensource.com, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
List-Id: xen-devel@lists.xenproject.org

Mount with stock kernel in PV DomU - no problems
---------------------------------------------------------------------
Started domain F14PV (id=4)

[    0.030070] PCI: Fatal: No config space access function found
[    0.142207] drivers/rtc/hctosys.c: unable to open rtc device (rtc0)

Fedora release 14 (Laughlin)
Kernel 2.6.35.6-45.fc14.x86_64 on an x86_64 (/dev/hvc0)

fedora14 login: root
Password:
Last login: Mon Nov 15 13:42:03 on hvc0
[root@fedora14 ~]# mount 192.168.1.9:/home/boris /mnt/nfs
[root@fedora14 ~]#
----------------------------------------------------------------------
                                  vs
Mount with the most recent Michael Young kernel - crashing kernel
----------------------------------------------------------------------
Fedora release 14 (Laughlin)
Kernel 2.6.37-0.1.rc1.git8.xendom0.fc14.x86_64 on an x86_64 (/dev/hvc0)

fedora14 login: root
Password:
[   25.825048] eth0: no IPv6 routers present
Last login: Mon Nov 15 13:48:31 on hvc0
[root@fedora14 ~]# mount 192.168.1.9:/home/boris /mnt/nfs
[   44.240979] FS-Cache: Loaded
[   44.275659] FS-Cache: Netfs 'nfs' registered for caching
[root@fedora14 ~]#
------------------------------------------------------------------------------

Boris.

--- On Mon, 11/15/10, Boris Derzhavets <bderzhavets@yahoo.com> wrote:

From: Boris Derzhavets <bderzhavets@yahoo.com>
Subject: Re: [Xen-devel] Re: 2.6.37-rc1 mainline domU - BUG: unable to handle kernel paging request
To: "Sander Eikelenboom" <linux@eikelenboom.it>, "Bruce Edge" <bruce.edge@gmail.com>
Cc: "Jeremy Fitzhardinge" <jeremy@goop.org>, xen-devel@lists.xensource.com, "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
Date: Monday, November 15, 2010, 3:06 AM

Stack trace on f14 when working with NFS mount

[  218.984818] ------------[ cut here ]------------
[  218.984834] kernel BUG at mm/mmap.c:2399!
[  218.984844] invalid opcode: 0000 [#1] SMP
[  218.984857] last sysfs file: /sys/devices/system/cpu/cpu1/cache/index2/shared_cpu_map
[  218.984872] CPU 1
[  218.984879] Modules linked in: nfs fscache deflate zlib_deflate ctr camellia cast5 rmd160 crypto_null ccm serpent blowfish twofish_generic twofish_x86_64 twofish_common ecb xcbc cbc sha256_generic sha512_generic des_generic cryptd aes_x86_64 aes_generic ah6 ah4 esp6 esp4 xfrm4_mode_beet xfrm4_tunnel tunnel4 xfrm4_mode_tunnel xfrm4_mode_transport xfrm6_mode_transport xfrm6_mode_ro xfrm6_mode_beet xfrm6_mode_tunnel ipcomp ipcomp6 xfrm_ipcomp xfrm6_tunnel tunnel6 af_key nfsd lockd nfs_acl auth_rpcgss exportfs sunrpc ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 ip6table_filter ip6_tables ipv6 uinput xen_netfront microcode xen_blkfront [last unloaded: scsi_wait_scan]
[  218.985011]
[  218.985011] Pid: 1566, comm: ls Not tainted 2.6.37-0.1.rc1.git8.xendom0.fc14.x86_64 #1 /
[  218.985011] RIP: e030:[<ffffffff8110ada1>]  [<ffffffff8110ada1>] exit_mmap+0x10c/0x119
[  218.985011] RSP: e02b:ffff8800774a9e18  EFLAGS: 00010202
[  218.985011] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0020000000000000
[  218.985011] RDX: 0000000000100004 RSI: ffff8800770ea1b8 RDI: ffffea0001a00230
[  218.985011] RBP: ffff8800774a9e48 R08: ffff88007d045108 R09: 000000000000005a
[  218.985011] R10: ffffffff8100750f R11: ffffea000182b7b0 R12: ffff880077dc6300
[  218.985011] R13: ffff88007fa1b1e0 R14: ffff880077dc6368 R15: 0000000000000001
[  218.985011] FS:  00007f4a38dd17c0(0000) GS:ffff88007fa0d000(0000) knlGS:0000000000000000
[  218.985011] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[  218.985011] CR2: 00007f4a380a1940 CR3: 0000000001a03000 CR4: 0000000000002660
[  218.985011] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  218.985011] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[  218.985011] Process ls (pid: 1566, threadinfo ffff8800774a8000, task ffff880003ca47c0)
[  218.985011] Stack:
[  218.985011]  000000000000006b ffff88007fa1b1e0 ffff8800774a9e38 ffff880077dc6300
[  218.985011]  ffff880077dc6440 ffff880003ca4db0 ffff8800774a9e68 ffffffff810505fc
[  218.985011]  ffff880003ca47c0 ffff880077dc6300 ffff8800774a9eb8 ffffffff81056747
[  218.985011] Call Trace:
[  218.985011]  [<ffffffff810505fc>] mmput+0x65/0xd8
[  218.985011]  [<ffffffff81056747>] exit_mm+0x13e/0x14b
[  218.985011]  [<ffffffff81056976>] do_exit+0x222/0x7c6
[  218.985011]  [<ffffffff8100750f>] ? xen_restore_fl_direct_end+0x0/0x1
[  218.985011]  [<ffffffff8107ea7c>] ? arch_local_irq_restore+0xb/0xd
[  218.985011]  [<ffffffff814b3949>] ? lockdep_sys_exit_thunk+0x35/0x67
[  218.985011]  [<ffffffff810571b0>] do_group_exit+0x88/0xb6
[  218.985011]  [<ffffffff810571f5>] sys_exit_group+0x17/0x1b
[  218.985011]  [<ffffffff8100acf2>] system_call_fastpath+0x16/0x1b
[  218.985011] Code: 8d 7d 18 e8 c3 8a 00 00 41 c7 45 08 00 00 00 00 48 89 df e8 0d e9 ff ff 48 85 c0 48 89 c3 75 f0 49 83 bc 24 98 01 00 00 00 74 02 <0f> 0b 48 83 c4 18 5b 41 5c 41 5d c9 c3 55 48 89 e5 41 54 53 48
[  218.985011] RIP  [<ffffffff8110ada1>] exit_mmap+0x10c/0x119
[  218.985011]  RSP <ffff8800774a9e18>
[  218.985011] ---[ end trace 99b09fa378e85262 ]---
[  218.985011] Fixing recursive fault but reboot is needed!

[  259.093423] BUG: unable to handle kernel paging request at ffff880077d352a8
[  259.093441] IP: [<ffffffff81037648>] ptep_set_access_flags+0x2b/0x51
[  259.093456] PGD 1a04067 PUD 59c9067 PMD 5b88067 PTE 8010000077d35065
[  259.093472] Oops: 0003 [#2] SMP
[  259.093481] last sysfs file: /sys/devices/system/cpu/cpu1/cache/index2/shared_cpu_map
[  259.093493] CPU 1
[  259.093498] Modules linked in: nfs fscache deflate zlib_deflate ctr camellia cast5 rmd160 crypto_null ccm serpent blowfish twofish_generic twofish_x86_64 twofish_common ecb xcbc cbc sha256_generic sha512_generic des_generic cryptd aes_x86_64 aes_generic ah6 ah4 esp6 esp4 xfrm4_mode_beet xfrm4_tunnel tunnel4 xfrm4_mode_tunnel xfrm4_mode_transport xfrm6_mode_transport xfrm6_mode_ro xfrm6_mode_beet xfrm6_mode_tunnel ipcomp ipcomp6 xfrm_ipcomp xfrm6_tunnel tunnel6 af_key nfsd lockd nfs_acl auth_rpcgss exportfs sunrpc ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 ip6table_filter ip6_tables ipv6 uinput xen_netfront microcode xen_blkfront [last unloaded: scsi_wait_scan]
[  259.093652]
[  259.093658] Pid: 1567, comm: abrtd Tainted: G      D     2.6.37-0.1.rc1.git8.xendom0.fc14.x86_64 #1 /
[  259.093669] RIP: e030:[<ffffffff81037648>]  [<ffffffff81037648>] ptep_set_access_flags+0x2b/0x51
[  259.093683] RSP: e02b:ffff8800770e7bf8  EFLAGS: 00010202
[  259.093690] RAX: 80000001bf75f101 RBX: ffff880077521400 RCX: 80000001bf75f167
[  259.093699] RDX: ffff880077d352a8 RSI: 00007fb9b9255ad0 RDI: ffff880077521400
[  259.093708] RBP: ffff8800770e7c28 R08: 0000000000000001 R09: 1580000000000000
[  259.093717] R10: ffffffff8100750f R11: ffff880077dc5800 R12: 00007fb9b9255ad0
[  259.093726] R13: 0000000000000001 R14: ffff880003f2f9f8 R15: ffff880077d352a8
[  259.093737] FS:  00007fb9b9255800(0000) GS:ffff88007fa0d000(0000) knlGS:0000000000000000
[  259.093747] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[  259.093755] CR2: ffff880077d352a8 CR3: 00000000043c8000 CR4: 0000000000002660
[  259.093764] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  259.093773] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[  259.093783] Process abrtd (pid: 1567, threadinfo ffff8800770e6000, task ffff880003d2c7c0)
[  259.093800] Stack:
[  259.093807]  ffffea00018382b0 0000000000000000 0000000000000034 0000000000000000
[  259.093829]  ffff880077521400 0000000000000000 ffff8800770e7cb8 ffffffff81104a57
[  259.093851]  ffffffff810050a3 ffffffff00000001 ffff880004307e48 ffff8800770e7ca8
[  259.093873] Call Trace:
[  259.093885]  [<ffffffff81104a57>] do_wp_page+0x241/0x53d
[  259.093899]  [<ffffffff810050a3>] ? xen_pte_val+0x6a/0x6c
[  259.093911]  [<ffffffff81004635>] ? __raw_callee_save_xen_pte_val+0x11/0x1e
[  259.093926]  [<ffffffff8100750f>] ? xen_restore_fl_direct_end+0x0/0x1
[  259.093941]  [<ffffffff81106491>] ? handle_mm_fault+0x6ea/0x7af
[  259.093954]  [<ffffffff811064e2>] handle_mm_fault+0x73b/0x7af
[  259.093969]  [<ffffffff81073597>] ? down_read_trylock+0x44/0x4e
[  259.093983]  [<ffffffff814b7aa4>] do_page_fault+0x363/0x385
[  259.093996]  [<ffffffff81006f59>] ? xen_force_evtchn_callback+0xd/0xf
[  259.094011]  [<ffffffff81007522>] ? check_events+0x12/0x20
[  259.094025]  [<ffffffff814b3912>] ? trace_hardirqs_off_thunk+0x3a/0x3c
[  259.094039]  [<ffffffff814b4ad5>] page_fault+0x25/0x30
[  259.094053]  [<ffffffff8125403d>] ? __put_user_4+0x1d/0x30
[  259.094066]  [<ffffffff8104bf66>] ? schedule_tail+0x61/0x65
[  259.094079]  [<ffffffff8100abf3>] ret_from_fork+0x13/0x80
[  259.094089] Code: 55 48 89 e5 41 55 41 54 53 48 83 ec 18 0f 1f 44 00 00 48 39 0a 48 89 fb 49 89 f4 0f 95 c0 45 85 c0 44 0f b6 e8 74 1c 84 c0 74 18 <48> 89 0a 48 8b 3f 0f 1f 80 00 00 00 00 4c 89 e6 48 89 df e8 bb
[  259.094149] RIP  [<ffffffff81037648>] ptep_set_access_flags+0x2b/0x51
[  259.094149]  RSP <ffff8800770e7bf8>
[  259.094149] CR2: ffff880077d352a8
[  259.094149] ---[ end trace 99b09fa378e85263 ]---

--- On Sun, 11/14/10, Bruce Edge <bruce.edge@gmail.com> wrote:

From: Bruce Edge <bruce.edge@gmail.com>
Subject: Re: [Xen-devel] Re: 2.6.37-rc1 mainline domU - BUG: unable to handle kernel paging request
To: "Sander Eikelenboom" <linux@eikelenboom.it>
Cc: "Boris Derzhavets" <bderzhavets@yahoo.com>, xen-devel@lists.xensource.com, "Jeremy Fitzhardinge" <jeremy@goop.org>, "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
Date: Sunday, November 14, 2010, 4:35 PM

On Sun, Nov 14, 2010 at 8:56 AM, Sander Eikelenboom <linux@eikelenboom.it> wrote:
> Hmmm, have you tried doing a lot of I/O with something other than NFS?
> That would perhaps pinpoint it to NFS doing something not completely compatible with Xen.

I have my own suspicions regarding the more recent NFS clients.
Post-10.04 Ubuntu variants do not tolerate large NFS transfers even without Xen. Anything more than a few hundred megabytes and you start getting "task blocked for more than 120 seconds" messages, along with stack traces showing part of the NFS call stack.

Perhaps a parallel effort could be to test the 2.6.37-rc1 kernel with something other than NFS for remote filesystems. I'll see if I get the same problems with glusterfs.

-Bruce

> I'm not using NFS (I still use file:-based guests, and I use glusterfs (a FUSE-based userspace cluster FS) to share disk space to domUs via Ethernet).
> I tried NFS in the past, but had some trouble setting it up, and even more problems with disconnects.
>
> I haven't seen any "unable to handle page request" problems with my mix of guest kernels, which includes some 2.6.37-rc1 kernels.
>
> --
>
> Sander
>
> Sunday, November 14, 2010, 5:37:59 PM, you wrote:
>
>> I've tested an F14 DomU (kernel vmlinuz-2.6.37-0.1.rc1.git8.xendom0.fc14.x86_64) as an NFS client and a Xen 4.0.1 F14 Dom0 (kernel vmlinuz-2.6.32.25-172.xendom0.fc14.x86_64) as the NFS server. I copied 700 MB ISO images from the NFS folder at Dom0 to the DomU and scp'ed them back to Dom0. For about 30-40 minutes the DomU ran pretty stably; a kernel crash ("unable to handle page request") was reported once by the F14 DomU, but it didn't actually bring the DomU down. The same exercise with F14 replaced by Ubuntu 10.04 Server crashes the DomU within several minutes. The Dom0 instances dual-boot on the same development box (Q9500, ASUS P5Q3, 8 GB).
>
>> Boris.
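The copy-and-scp-back exercise Boris describes can be scripted and made self-checking. The sketch below is hypothetical: it uses a local scratch directory as a stand-in for the NFS mount and a small 8 MB file instead of a 700 MB ISO, so it can be tried safely anywhere; to reproduce the real workload, point the "nfs" directory at /mnt/nfs and scale `count=` up.

```shell
# Sketch of a self-checking NFS round-trip stress test.
# Assumptions: "$work/nfs" stands in for the real NFS mount (/mnt/nfs),
# and 8 MB stands in for the 700 MB ISO used in the report.
set -e
work=$(mktemp -d)
mkdir "$work/local" "$work/nfs"

# Generate a random test image.
dd if=/dev/urandom of="$work/local/test.iso" bs=1M count=8 2>/dev/null

cp "$work/local/test.iso" "$work/nfs/test.iso"        # "copy to the NFS mount"
cp "$work/nfs/test.iso" "$work/local/test.iso.back"   # "scp it back"

# Verify the round trip byte-for-byte; corruption (or a hang under the
# 120-second hung-task watchdog) shows up at this step.
if cmp -s "$work/local/test.iso" "$work/local/test.iso.back"; then
    echo "round-trip OK" | tee result.txt
else
    echo "round-trip CORRUPTED" | tee result.txt
fi
```

Looping this with progressively larger `count=` values would give a reproducible threshold for the "task blocked for more than 120 seconds" failures rather than an anecdotal "a few hundred megabytes".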
>
>> --- On Fri, 11/12/10, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
>
>> From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> Subject: Re: [Xen-devel] Re: 2.6.37-rc1 mainline domU - BUG: unable to handle kernel paging request
>> To: "Sander Eikelenboom" <linux@eikelenboom.it>
>> Cc: "Boris Derzhavets" <bderzhavets@yahoo.com>, xen-devel@lists.xensource.com, "Bruce Edge" <bruce.edge@gmail.com>, "Jeremy Fitzhardinge" <jeremy@goop.org>
>> Date: Friday, November 12, 2010, 12:01 PM
>
>> On Fri, Nov 12, 2010 at 05:27:43PM +0100, Sander Eikelenboom wrote:
>>> Hi Bruce,
>>>
>>> Perhaps handpick some kernels before and after the pulls of the xen patches (pv-on-hvm etc.) to begin with?
>>> When you let git choose, especially with rc1 kernels, you will end up with kernels in between patch series, resulting in panics.
>
>> Well, just the bare-bones boot of PV guests with nothing fancy ought to work.
>
>> But that is the theory and ..
>>> > The git bisecting is slow going. I've never tried that before and I'm a git
>>> > rookie.
>>> > I picked 2.6.36 - 2.6.37-rc1 as the bisect range and my first 2 bisects all
>>> > panic at boot, so I'm obviously doing something wrong.
>>> > I'll RTFM a bit more and keep at it.
>
>> .. as Bruce experiences, this is not the case. Hmm..
>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xensource.com
>> http://lists.xensource.com/xen-devel
>
> --
> Best regards,
>  Sander                              mailto:linux@eikelenboom.it

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel