From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: "qi.fuli@fujitsu.com" <qi.fuli@fujitsu.com>
Cc: "virtio-fs@redhat.com" <virtio-fs@redhat.com>,
	"misono.tomohiro@fujitsu.com" <misono.tomohiro@fujitsu.com>,
	"masayoshi.mizuma@fujitsu.com" <masayoshi.mizuma@fujitsu.com>
Subject: Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
Date: Fri, 11 Oct 2019 10:21:45 +0100	[thread overview]
Message-ID: <20191011092145.GE3354@work-vm> (raw)
In-Reply-To: <27035e4a-bd12-e5d8-30d0-0df45e75457c@jp.fujitsu.com>

* qi.fuli@fujitsu.com (qi.fuli@fujitsu.com) wrote:
> Hi,
> 
> Thank you for your comments.
> 
> On 10/10/19 1:51 AM, Dr. David Alan Gilbert wrote:
> > * Dr. David Alan Gilbert (dgilbert@redhat.com) wrote:
> >> * qi.fuli@fujitsu.com (qi.fuli@fujitsu.com) wrote:
> >>> Hello,
> >>>
> >>
> >> Hi,
> > 
> > In addition to the other questions, I'd appreciate
> > if you could explain your xfstests setup and the way you told
> > it about the virtio mounts.
> 
> In order to run the tests on virtio-fs, I made the following changes to
> xfstests[1].
> 
> diff --git a/check b/check

Thanks; it would be great if you could send these changes upstream
to xfstests; I know there's at least one other person who has written
xfstests changes.
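
Since the diff body was trimmed above: as a rough, hypothetical sketch
(not the actual diff from this thread), adding a new FSTYP to xfstests'
check script usually amounts to one more switch in its option-parsing
loop, something like

    case "$1" in
        -nfs)           FSTYP=nfs ;;
        -virtiofs)      FSTYP=virtiofs ;;
        ...
    esac

plus matching mount logic in common/rc.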

I'm hitting some problems getting it to run; it reliably hits an
NFS kernel client oops on the Fedora kernels I'm using on the host,
on generic/013.

I have reproduced the access-time error you're seeing for generic/003.

Dave

> The command to run xfstests for virtio-fs is:
> $ ./check -virtiofs generic/???
> 
> [1] https://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
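
xfstests also needs to be told where the test and scratch filesystems
live. A hypothetical local.config for the myfs1/myfs2 tags used in the
qemu command further down, with mount points that are my assumption
rather than anything from this thread, might be:

    export FSTYP=virtiofs
    export TEST_DEV=myfs1
    export TEST_DIR=/mnt/test
    export SCRATCH_DEV=myfs2
    export SCRATCH_MNT=/mnt/scratch

Here the virtiofs tags stand in for block device paths.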
> 
> > 
> > Dave
> > 
> >>
> >>> We have run the generic xfstests for virtio-fs[1] on aarch64[2].
> >>> Here we have selected some tests that did not run or that failed,
> >>> and categorized them based on the reasons as we understand them.
> >>
> >> Thanks for sharing your test results.
> >>
> >>>    * Category 1: generic/003, generic/192
> >>>      Error: access time error
> >>>      Reason: file_accessed() not run
> >>>    * Category 2: generic/089, generic/478, generic/484, generic/504
> >>>      Error: lock error
> >>>    * Category 3: generic/426, generic/467, generic/477
> >>>      Error: open_by_handle error
> >>>    * Category 4: generic/551
> >>>      Error: kvm panic
> >>
> >> I'm not expecting a KVM panic; can you give us a copy of the
> >> oops/panic/backtrace you're seeing?
> 
> Sorry, in my recent tests the KVM panic didn't happen, but an OOM
> event occurred. I will expand the memory and test it again; please
> give me a little more time.
> 
> >>
> >>>    * Category 5: generic/011, generic/013
> >>>      Error: cannot remove file
> >>>      Reason: NFS backend
> >>>    * Category 6: generic/035
> >>>      Error: nlink is 1, should be 0
> >>>    * Category 7: generic/125, generic/193, generic/314
> >>>      Error: open/chown/mkdir permission error
> >>>    * Category 8: generic/469
> >>>      Error: fallocate keep_size is needed
> >>>      Reason: NFS4.0 backend
> >>>    * Category 9: generic/323
> >>>      Error: system hang
> >>>      Reason: fd is closed before AIO finishes
> >>
> >> When you say 'system hang', do you mean the whole guest hanging?
> >> Did the virtiofsd process hang or crash?
> 
> No, not the whole guest; only the test process hangs. The virtiofsd
> process keeps working. Here are some debug messages:
> 
> [ 7740.126845] INFO: task aio-last-ref-he:3361 blocked for more than 122 seconds.
> [ 7740.128884]       Not tainted 5.4.0-rc1-aarch64-5.4-rc1 #1
> [ 7740.130364] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 7740.132472] aio-last-ref-he D    0  3361   3143 0x00000220
> [ 7740.133954] Call trace:
> [ 7740.134627]  __switch_to+0x98/0x1e0
> [ 7740.135579]  __schedule+0x29c/0x688
> [ 7740.136527]  schedule+0x38/0xb8
> [ 7740.137615]  schedule_timeout+0x258/0x358
> [ 7740.139160]  wait_for_completion+0x174/0x400
> [ 7740.140322]  exit_aio+0x118/0x6c0
> [ 7740.141226]  mmput+0x6c/0x1c0
> [ 7740.142036]  do_exit+0x29c/0xa58
> [ 7740.142915]  do_group_exit+0x48/0xb0
> [ 7740.143888]  get_signal+0x168/0x8b0
> [ 7740.144836]  do_notify_resume+0x174/0x3d8
> [ 7740.145925]  work_pending+0x8/0x10
> [ 7863.006847] INFO: task aio-last-ref-he:3361 blocked for more than 245 seconds.
> [ 7863.008876]       Not tainted 5.4.0-rc1-aarch64-5.4-rc1 #1
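
For reference, generic/323 exercises roughly this pattern. The
following is a hypothetical libaio sketch, not the test's actual
source: the fd is closed and the process exits while AIO is still in
flight, so the kernel's exit_aio() must wait for completion, which is
where the trace above is stuck.

    /* build with: gcc -D_GNU_SOURCE -o aio-close aio-close.c -laio */
    #include <fcntl.h>
    #include <libaio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        io_context_t ctx = 0;
        struct iocb cb, *cbs[1] = { &cb };
        static char buf[4096] __attribute__((aligned(4096)));
        int fd;

        if (io_setup(1, &ctx) < 0)            /* create the AIO context */
            exit(1);
        fd = open("testfile", O_RDWR | O_CREAT | O_DIRECT, 0644);
        if (fd < 0)
            exit(1);
        io_prep_pwrite(&cb, fd, buf, sizeof(buf), 0);
        if (io_submit(ctx, 1, cbs) != 1)      /* kick off the write */
            exit(1);
        close(fd);  /* close while the write may still be in flight */
        return 0;   /* no io_destroy(); exit_aio() must reap the request */
    }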
> 
> Thanks,
> QI Fuli
> 
> >>>
> >>> We would like to know whether virtio-fs does not support these
> >>> tests by specification, or whether these are bugs that need to be
> >>> fixed. It would be much appreciated if anyone could give some
> >>> comments.
> >>
> >> It'll take us a few days to go through and figure that out; we'll
> >> try to replicate it.
> >>
> >> Dave
> >>
> >>>
> >>> [1] qemu: https://gitlab.com/virtio-fs/qemu/tree/virtio-fs-dev
> >>>       start qemu script:
> >>>       $VIRTIOFSD -o vhost_user_socket=/tmp/vhostqemu1 -o source=/root/virtio-fs/test1/ -o cache=always -o xattr -o flock -d &
> >>>       $VIRTIOFSD -o vhost_user_socket=/tmp/vhostqemu2 -o source=/root/virtio-fs/test2/ -o cache=always -o xattr -o flock -d &
> >>>       $QEMU -M virt,accel=kvm,gic_version=3 \
> >>>           -cpu host \
> >>>           -smp 8 \
> >>>           -m 8192 \
> >>>           -nographic \
> >>>           -serial mon:stdio \
> >>>           -netdev tap,id=net0 -device virtio-net-pci,netdev=net0,id=net0,mac=XX:XX:XX:XX:XX:XX \
> >>>           -object memory-backend-file,id=mem,size=8G,mem-path=/dev/shm,share=on \
> >>>           -numa node,memdev=mem \
> >>>           -drive file=/root/virtio-fs/AAVMF/AAVMF_CODE.fd,if=pflash,format=raw,unit=0,readonly=on \
> >>>           -drive file=$VARS,if=pflash,format=raw,unit=1 \
> >>>           -chardev socket,id=char1,path=/tmp/vhostqemu1 \
> >>>           -device vhost-user-fs-pci,queue-size=1024,chardev=char1,tag=myfs1,cache-size=0 \
> >>>           -chardev socket,id=char2,path=/tmp/vhostqemu2 \
> >>>           -device vhost-user-fs-pci,queue-size=1024,chardev=char2,tag=myfs2,cache-size=0 \
> >>>           -drive if=virtio,file=/var/lib/libvirt/images/guest.img
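
Inside the guest, the two exported tags would then presumably be
mounted with the virtiofs filesystem type, e.g.:

    mount -t virtiofs myfs1 /mnt/test
    mount -t virtiofs myfs2 /mnt/scratch

The tag names match the vhost-user-fs-pci devices above; the mount
points are my assumption.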
> >>>
> >>> [2] host kernel: 4.18.0-80.4.2.el8_0.aarch64
> >>>       guest kernel: 5.4-rc1
> >>>       Arch: Arm64
> >>>       backend: NFS 4.0
> >>>
> >>> Thanks,
> >>> QI Fuli
> >>>
> >>> _______________________________________________
> >>> Virtio-fs mailing list
> >>> Virtio-fs@redhat.com
> >>> https://www.redhat.com/mailman/listinfo/virtio-fs
> >> --
> >> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> >>
> > --
> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> > 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

