* [Virtio-fs]  xfstest results for virtio-fs on aarch64
@ 2019-10-07 12:11 qi.fuli
  2019-10-07 14:34 ` Dr. David Alan Gilbert
  2019-11-06 12:13 ` Dr. David Alan Gilbert
  0 siblings, 2 replies; 32+ messages in thread
From: qi.fuli @ 2019-10-07 12:11 UTC (permalink / raw)
  To: virtio-fs; +Cc: misono.tomohiro, masayoshi.mizuma

Hello,

We have run the generic tests of xfstests for virtio-fs[1] on aarch64[2].
Here we have selected the tests that did not run or failed, and
categorized them based on the reasons as we understand them.

  * Category 1: generic/003, generic/192
    Error: access time error
    Reason: file_accessed() not run
  * Category 2: generic/089, generic/478, generic/484, generic/504
    Error: lock error
  * Category 3: generic/426, generic/467, generic/477
    Error: open_by_handle error
  * Category 4: generic/551
    Error: kvm panic
  * Category 5: generic/011, generic/013
    Error: cannot remove file
    Reason: NFS backend
  * Category 6: generic/035
    Error: nlink is 1, should be 0
  * Category 7: generic/125, generic/193, generic/314
    Error: open/chown/mkdir permission error
  * Category 8: generic/469
    Error: fallocate keep_size is needed
    Reason: NFS4.0 backend
  * Category 9: generic/323
    Error: system hang
    Reason: fd is closed before AIO finishes

We would like to know whether virtio-fs is not expected to support these
cases per the specification, or whether these are bugs that need to be
fixed. Any comments would be greatly appreciated.

[1] qemu: https://gitlab.com/virtio-fs/qemu/tree/virtio-fs-dev
     start qemu script:
     $VIRTIOFSD -o vhost_user_socket=/tmp/vhostqemu1 -o source=/root/virtio-fs/test1/ -o cache=always -o xattr -o flock -d &
     $VIRTIOFSD -o vhost_user_socket=/tmp/vhostqemu2 -o source=/root/virtio-fs/test2/ -o cache=always -o xattr -o flock -d &
     $QEMU -M virt,accel=kvm,gic_version=3 \
         -cpu host \
         -smp 8 \
         -m 8192 \
         -nographic \
         -serial mon:stdio \
         -netdev tap,id=net0 \
         -device virtio-net-pci,netdev=net0,id=net0,mac=XX:XX:XX:XX:XX:XX \
         -object memory-backend-file,id=mem,size=8G,mem-path=/dev/shm,share=on \
         -numa node,memdev=mem \
         -drive file=/root/virtio-fs/AAVMF/AAVMF_CODE.fd,if=pflash,format=raw,unit=0,readonly=on \
         -drive file=$VARS,if=pflash,format=raw,unit=1 \
         -chardev socket,id=char1,path=/tmp/vhostqemu1 \
         -device vhost-user-fs-pci,queue-size=1024,chardev=char1,tag=myfs1,cache-size=0 \
         -chardev socket,id=char2,path=/tmp/vhostqemu2 \
         -device vhost-user-fs-pci,queue-size=1024,chardev=char2,tag=myfs2,cache-size=0 \
         -drive if=virtio,file=/var/lib/libvirt/images/guest.img

[2] host kernel: 4.18.0-80.4.2.el8_0.aarch64
     guest kernel: 5.4-rc1
     Arch: Arm64
     backend: NFS 4.0

Thanks,
QI Fuli



* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-07 12:11 [Virtio-fs] xfstest results for virtio-fs on aarch64 qi.fuli
@ 2019-10-07 14:34 ` Dr. David Alan Gilbert
  2019-10-09 16:51   ` Dr. David Alan Gilbert
  2019-11-06 12:13 ` Dr. David Alan Gilbert
  1 sibling, 1 reply; 32+ messages in thread
From: Dr. David Alan Gilbert @ 2019-10-07 14:34 UTC (permalink / raw)
  To: qi.fuli; +Cc: virtio-fs, misono.tomohiro, masayoshi.mizuma

* qi.fuli@fujitsu.com (qi.fuli@fujitsu.com) wrote:
> Hello,
> 

Hi,

> We have run the generic tests of xfstest for virtio-fs[1] on aarch64[2],
> here we selected some tests that did not run or failed to run,
> and we categorized them, basing on the reasons in our understanding.

Thanks for sharing your test results.

>   * Category 1: generic/003, generic/192
>     Error: access time error
>     Reason: file_accessed() not run
>   * Category 2: generic/089, generic/478, generic/484, generic/504
>     Error: lock error
>   * Category 3: generic/426, generic/467, generic/477
>     Error: open_by_handle error
>   * Category 4: generic/551
>     Error: kvm panic

I'm not expecting a KVM panic; can you give us a copy of the 
oops/panic/backtrace you're seeing?

>   * Category 5: generic/011, generic/013
>     Error: cannot remove file
>     Reason: NFS backend
>   * Category 6: generic/035
>     Error: nlink is 1, should be 0
>   * Category 7: generic/125, generic/193, generic/314
>     Error: open/chown/mkdir permission error
>   * Category 8: generic/469
>     Error: fallocate keep_size is needed
>     Reason: NFS4.0 backend
>   * Category 9: generic/323
>     Error: system hang
>     Reason: fd is close before AIO finished

When you say 'system hang', do you mean the whole guest hanging?
Did the virtiofsd process hang or crash?

> 
> We would like to know if virtio-fs does not support these tests in
> the specification or they are bugs that need to be fixed.
> It would be very appreciated if anyone could give some comments.

It'll take us a few days to go through and figure that out; we'll
try and replicate it.

Dave

> 
> [1] qemu: https://gitlab.com/virtio-fs/qemu/tree/virtio-fs-dev
>      start qemu script:
>      $VIRTIOFSD -o vhost_user_socket=/tmp/vhostqemu1 -o 
> source=/root/virtio-fs/test1/ -o cache=always -o xattr -o flock -d &
>      $VIRTIOFSD -o vhost_user_socket=/tmp/vhostqemu2 -o 
> source=/root/virtio-fs/test2/ -o cache=always -o xattr -o flock -d &
>      $QEMU -M virt,accel=kvm,gic_version=3 \
>          -cpu host \
>          -smp 8 \
>          -m 8192\
>          -nographic \
>          -serial mon:stdio \
>          -netdev tap,id=net0 -device 
> virtio-net-pci,netdev=net0,id=net0,mac=XX:XX:XX:XX:XX:XX \
>          -object 
> memory-backend-file,id=mem,size=8G,mem-path=/dev/shm,share=on \
>          -numa node,memdev=mem \
>          -drive 
> file=/root/virtio-fs/AAVMF/AAVMF_CODE.fd,if=pflash,format=raw,unit=0,readonly=on 
> \
>          -drive file=$VARS,if=pflash,format=raw,unit=1 \
>          -chardev socket,id=char1,path=/tmp/vhostqemu1 \
>          -device 
> vhost-user-fs-pci,queue-size=1024,chardev=char1,tag=myfs1,cache-size=0 \
>          -chardev socket,id=char2,path=/tmp/vhostqemu2 \
>          -device 
> vhost-user-fs-pci,queue-size=1024,chardev=char2,tag=myfs2,cache-size=0 \
>          -drive if=virtio,file=/var/lib/libvirt/images/guest.img
> 
> [2] host kernel: 4.18.0-80.4.2.el8_0.aarch64
>      guest kernel: 5.4-rc1
>      Arch: Arm64
>      backend: NFS 4.0
> 
> Thanks,
> QI Fuli
> 
> _______________________________________________
> Virtio-fs mailing list
> Virtio-fs@redhat.com
> https://www.redhat.com/mailman/listinfo/virtio-fs
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-07 14:34 ` Dr. David Alan Gilbert
@ 2019-10-09 16:51   ` Dr. David Alan Gilbert
  2019-10-10  9:57     ` qi.fuli
  0 siblings, 1 reply; 32+ messages in thread
From: Dr. David Alan Gilbert @ 2019-10-09 16:51 UTC (permalink / raw)
  To: qi.fuli; +Cc: virtio-fs, misono.tomohiro, masayoshi.mizuma

* Dr. David Alan Gilbert (dgilbert@redhat.com) wrote:
> * qi.fuli@fujitsu.com (qi.fuli@fujitsu.com) wrote:
> > Hello,
> > 
> 
> Hi,

In addition to the other questions, I'd appreciate it if you could
explain your xfstests setup and how you told it about the virtio-fs
mounts.

Dave

> 
> > We have run the generic tests of xfstest for virtio-fs[1] on aarch64[2],
> > here we selected some tests that did not run or failed to run,
> > and we categorized them, basing on the reasons in our understanding.
> 
> Thanks for sharing your test results.
> 
> >   * Category 1: generic/003, generic/192
> >     Error: access time error
> >     Reason: file_accessed() not run
> >   * Category 2: generic/089, generic/478, generic/484, generic/504
> >     Error: lock error
> >   * Category 3: generic/426, generic/467, generic/477
> >     Error: open_by_handle error
> >   * Category 4: generic/551
> >     Error: kvm panic
> 
> I'm not expecting a KVM panic; can you give us a copy of the 
> oops/panic/backtrace you're seeing?
> 
> >   * Category 5: generic/011, generic/013
> >     Error: cannot remove file
> >     Reason: NFS backend
> >   * Category 6: generic/035
> >     Error: nlink is 1, should be 0
> >   * Category 7: generic/125, generic/193, generic/314
> >     Error: open/chown/mkdir permission error
> >   * Category 8: generic/469
> >     Error: fallocate keep_size is needed
> >     Reason: NFS4.0 backend
> >   * Category 9: generic/323
> >     Error: system hang
> >     Reason: fd is close before AIO finished
> 
> When you 'say system hang' - you mean the whole guest hanging?
> Did the virtiofsd process hang or crash?
> 
> > 
> > We would like to know if virtio-fs does not support these tests in
> > the specification or they are bugs that need to be fixed.
> > It would be very appreciated if anyone could give some comments.
> 
> It'll take us a few days to go through and figure that out; we'll
> try and replicate it.
> 
> Dave
> 
> > 
> > [1] qemu: https://gitlab.com/virtio-fs/qemu/tree/virtio-fs-dev
> >      start qemu script:
> >      $VIRTIOFSD -o vhost_user_socket=/tmp/vhostqemu1 -o 
> > source=/root/virtio-fs/test1/ -o cache=always -o xattr -o flock -d &
> >      $VIRTIOFSD -o vhost_user_socket=/tmp/vhostqemu2 -o 
> > source=/root/virtio-fs/test2/ -o cache=always -o xattr -o flock -d &
> >      $QEMU -M virt,accel=kvm,gic_version=3 \
> >          -cpu host \
> >          -smp 8 \
> >          -m 8192\
> >          -nographic \
> >          -serial mon:stdio \
> >          -netdev tap,id=net0 -device 
> > virtio-net-pci,netdev=net0,id=net0,mac=XX:XX:XX:XX:XX:XX \
> >          -object 
> > memory-backend-file,id=mem,size=8G,mem-path=/dev/shm,share=on \
> >          -numa node,memdev=mem \
> >          -drive 
> > file=/root/virtio-fs/AAVMF/AAVMF_CODE.fd,if=pflash,format=raw,unit=0,readonly=on 
> > \
> >          -drive file=$VARS,if=pflash,format=raw,unit=1 \
> >          -chardev socket,id=char1,path=/tmp/vhostqemu1 \
> >          -device 
> > vhost-user-fs-pci,queue-size=1024,chardev=char1,tag=myfs1,cache-size=0 \
> >          -chardev socket,id=char2,path=/tmp/vhostqemu2 \
> >          -device 
> > vhost-user-fs-pci,queue-size=1024,chardev=char2,tag=myfs2,cache-size=0 \
> >          -drive if=virtio,file=/var/lib/libvirt/images/guest.img
> > 
> > [2] host kernel: 4.18.0-80.4.2.el8_0.aarch64
> >      guest kernel: 5.4-rc1
> >      Arch: Arm64
> >      backend: NFS 4.0
> > 
> > Thanks,
> > QI Fuli
> > 
> > _______________________________________________
> > Virtio-fs mailing list
> > Virtio-fs@redhat.com
> > https://www.redhat.com/mailman/listinfo/virtio-fs
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> 
> _______________________________________________
> Virtio-fs mailing list
> Virtio-fs@redhat.com
> https://www.redhat.com/mailman/listinfo/virtio-fs
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-09 16:51   ` Dr. David Alan Gilbert
@ 2019-10-10  9:57     ` qi.fuli
  2019-10-11  9:21       ` Dr. David Alan Gilbert
  2019-10-11 18:45       ` Chirantan Ekbote
  0 siblings, 2 replies; 32+ messages in thread
From: qi.fuli @ 2019-10-10  9:57 UTC (permalink / raw)
  To: Dr. David Alan Gilbert, qi.fuli
  Cc: virtio-fs, misono.tomohiro, masayoshi.mizuma

Hi,

Thank you for your comments.

On 10/10/19 1:51 AM, Dr. David Alan Gilbert wrote:
> * Dr. David Alan Gilbert (dgilbert@redhat.com) wrote:
>> * qi.fuli@fujitsu.com (qi.fuli@fujitsu.com) wrote:
>>> Hello,
>>>
>>
>> Hi,
> 
> In addition to the other questions, I'd appreciate
> if you could explain your xfstests setup and the way you told
> it about the virtio mounts.

In order to run the tests on virtio-fs, I made the following changes to
xfstests[1].

diff --git a/check b/check
index c7f1dc5e..2e148e57 100755
--- a/check
+++ b/check
@@ -56,6 +56,7 @@ check options
      -glusterfs         test GlusterFS
      -cifs              test CIFS
      -9p                        test 9p
+    -virtiofs          test virtiofs
      -overlay           test overlay
      -pvfs2             test PVFS2
      -tmpfs             test TMPFS
@@ -268,6 +269,7 @@ while [ $# -gt 0 ]; do
         -glusterfs)     FSTYP=glusterfs ;;
         -cifs)          FSTYP=cifs ;;
         -9p)            FSTYP=9p ;;
+       -virtiofs)      FSTYP=virtiofs ;;
         -overlay)       FSTYP=overlay; export OVERLAY=true ;;
         -pvfs2)         FSTYP=pvfs2 ;;
         -tmpfs)         FSTYP=tmpfs ;;
diff --git a/common/config b/common/config
index 4eda36c7..551fed33 100644
--- a/common/config
+++ b/common/config
@@ -478,7 +478,7 @@ _check_device()
         fi

         case "$FSTYP" in
-       9p|tmpfs)
+       9p|tmpfs|virtiofs)
                 # 9p mount tags are just plain strings, so anything is allowed
                 # tmpfs doesn't use mount source, ignore
                 ;;
diff --git a/common/rc b/common/rc
index cfaabf10..3d5c8b23 100644
--- a/common/rc
+++ b/common/rc
@@ -603,6 +603,9 @@ _test_mkfs()
      9p)
         # do nothing for 9p
         ;;
+    virtiofs)
+       # do nothing for virtiofs
+       ;;
      ceph)
         # do nothing for ceph
         ;;
@@ -640,6 +643,9 @@ _mkfs_dev()
      9p)
         # do nothing for 9p
         ;;
+    virtiofs)
+       # do nothing for virtiofs
+       ;;
      overlay)
         # do nothing for overlay
         ;;
@@ -704,7 +710,7 @@ _scratch_mkfs()
         local mkfs_status

         case $FSTYP in
-       nfs*|cifs|ceph|overlay|glusterfs|pvfs2|9p)
+       nfs*|cifs|ceph|overlay|glusterfs|pvfs2|9p|virtiofs)
                 # unable to re-create this fstyp, just remove all files in
                 # $SCRATCH_MNT to avoid EEXIST caused by the leftover files
                 # created in previous runs
@@ -1467,7 +1473,7 @@ _require_scratch_nocheck()
                         _notrun "this test requires a valid \$SCRATCH_MNT"
                 fi
                 ;;
-       9p)
+       9p|virtiofs)
                 if [ -z "$SCRATCH_DEV" ]; then
                         _notrun "this test requires a valid \$SCRATCH_DEV"
                 fi
@@ -1591,7 +1597,7 @@ _require_test()
                         _notrun "this test requires a valid \$TEST_DIR"
                 fi
                 ;;
-       9p)
+       9p|virtiofs)
                 if [ -z "$TEST_DEV" ]; then
                         _notrun "this test requires a valid \$TEST_DEV"
                 fi
@@ -2686,6 +2692,9 @@ _check_test_fs()
      9p)
         # no way to check consistency for 9p
         ;;
+    virtiofs)
+       # no way to check consistency for virtiofs
+       ;;
      ceph)
         # no way to check consistency for CephFS
         ;;
@@ -2744,6 +2753,9 @@ _check_scratch_fs()
      9p)
         # no way to check consistency for 9p
         ;;
+    virtiofs)
+       # no way to check consistency for virtiofs
+       ;;
      ceph)
         # no way to check consistency for CephFS
         ;;

And my local.config is:

# cat ../xfstest-dev/local.config
export TEST_DEV=myfs1
export TEST_DIR=/test1
export SCRATCH_DEV=myfs2
export SCRATCH_MNT=/test2
export MOUNT_OPTIONS=""
export TEST_FS_MOUNT_OPTS=""

The command to run xfstests for virtio-fs is:
$ ./check -virtiofs generic/???
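
For reference, this assumes the tags myfs1/myfs2 are already mounted in
the guest at /test1 and /test2 (the equivalent of
"mount -t virtiofs myfs1 /test1"). A minimal C sketch of those mounts,
shown only as an illustration of how TEST_DEV/SCRATCH_DEV map to the
vhost-user-fs tags:

/* Illustration only: mount the virtio-fs tags from local.config inside
 * the guest before running ./check. */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* source = vhost-user-fs "tag", fstype = "virtiofs" */
    if (mount("myfs1", "/test1", "virtiofs", 0, NULL) != 0)
        perror("mount myfs1 on /test1");
    if (mount("myfs2", "/test2", "virtiofs", 0, NULL) != 0)
        perror("mount myfs2 on /test2");
    return 0;
}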

[1] https://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git

> 
> Dave
> 
>>
>>> We have run the generic tests of xfstest for virtio-fs[1] on aarch64[2],
>>> here we selected some tests that did not run or failed to run,
>>> and we categorized them, basing on the reasons in our understanding.
>>
>> Thanks for sharing your test results.
>>
>>>    * Category 1: generic/003, generic/192
>>>      Error: access time error
>>>      Reason: file_accessed() not run
>>>    * Category 2: generic/089, generic/478, generic/484, generic/504
>>>      Error: lock error
>>>    * Category 3: generic/426, generic/467, generic/477
>>>      Error: open_by_handle error
>>>    * Category 4: generic/551
>>>      Error: kvm panic
>>
>> I'm not expecting a KVM panic; can you give us a copy of the
>> oops/panic/backtrace you're seeing?

Sorry, in my recent tests the KVM panic did not happen, but an OOM event 
occurred. I will increase the memory and test again; please give me a 
little more time.

>>
>>>    * Category 5: generic/011, generic/013
>>>      Error: cannot remove file
>>>      Reason: NFS backend
>>>    * Category 6: generic/035
>>>      Error: nlink is 1, should be 0
>>>    * Category 7: generic/125, generic/193, generic/314
>>>      Error: open/chown/mkdir permission error
>>>    * Category 8: generic/469
>>>      Error: fallocate keep_size is needed
>>>      Reason: NFS4.0 backend
>>>    * Category 9: generic/323
>>>      Error: system hang
>>>      Reason: fd is close before AIO finished
>>
>> When you 'say system hang' - you mean the whole guest hanging?
>> Did the virtiofsd process hang or crash?

No, not the whole guest; only the test process hangs. The virtiofsd 
process keeps working. Here are some debug messages:

[ 7740.126845] INFO: task aio-last-ref-he:3361 blocked for more than 122 seconds.
[ 7740.128884]       Not tainted 5.4.0-rc1-aarch64-5.4-rc1 #1
[ 7740.130364] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 7740.132472] aio-last-ref-he D    0  3361   3143 0x00000220
[ 7740.133954] Call trace:
[ 7740.134627]  __switch_to+0x98/0x1e0
[ 7740.135579]  __schedule+0x29c/0x688
[ 7740.136527]  schedule+0x38/0xb8
[ 7740.137615]  schedule_timeout+0x258/0x358
[ 7740.139160]  wait_for_completion+0x174/0x400
[ 7740.140322]  exit_aio+0x118/0x6c0
[ 7740.141226]  mmput+0x6c/0x1c0
[ 7740.142036]  do_exit+0x29c/0xa58
[ 7740.142915]  do_group_exit+0x48/0xb0
[ 7740.143888]  get_signal+0x168/0x8b0
[ 7740.144836]  do_notify_resume+0x174/0x3d8
[ 7740.145925]  work_pending+0x8/0x10
[ 7863.006847] INFO: task aio-last-ref-he:3361 blocked for more than 245 seconds.
[ 7863.008876]       Not tainted 5.4.0-rc1-aarch64-5.4-rc1 #1

Thanks,
QI Fuli

>>>
>>> We would like to know if virtio-fs does not support these tests in
>>> the specification or they are bugs that need to be fixed.
>>> It would be very appreciated if anyone could give some comments.
>>
>> It'll take us a few days to go through and figure that out; we'll
>> try and replicate it.
>>
>> Dave
>>
>>>
>>> [1] qemu: https://gitlab.com/virtio-fs/qemu/tree/virtio-fs-dev
>>>       start qemu script:
>>>       $VIRTIOFSD -o vhost_user_socket=/tmp/vhostqemu1 -o
>>> source=/root/virtio-fs/test1/ -o cache=always -o xattr -o flock -d &
>>>       $VIRTIOFSD -o vhost_user_socket=/tmp/vhostqemu2 -o
>>> source=/root/virtio-fs/test2/ -o cache=always -o xattr -o flock -d &
>>>       $QEMU -M virt,accel=kvm,gic_version=3 \
>>>           -cpu host \
>>>           -smp 8 \
>>>           -m 8192\
>>>           -nographic \
>>>           -serial mon:stdio \
>>>           -netdev tap,id=net0 -device
>>> virtio-net-pci,netdev=net0,id=net0,mac=XX:XX:XX:XX:XX:XX \
>>>           -object
>>> memory-backend-file,id=mem,size=8G,mem-path=/dev/shm,share=on \
>>>           -numa node,memdev=mem \
>>>           -drive
>>> file=/root/virtio-fs/AAVMF/AAVMF_CODE.fd,if=pflash,format=raw,unit=0,readonly=on
>>> \
>>>           -drive file=$VARS,if=pflash,format=raw,unit=1 \
>>>           -chardev socket,id=char1,path=/tmp/vhostqemu1 \
>>>           -device
>>> vhost-user-fs-pci,queue-size=1024,chardev=char1,tag=myfs1,cache-size=0 \
>>>           -chardev socket,id=char2,path=/tmp/vhostqemu2 \
>>>           -device
>>> vhost-user-fs-pci,queue-size=1024,chardev=char2,tag=myfs2,cache-size=0 \
>>>           -drive if=virtio,file=/var/lib/libvirt/images/guest.img
>>>
>>> [2] host kernel: 4.18.0-80.4.2.el8_0.aarch64
>>>       guest kernel: 5.4-rc1
>>>       Arch: Arm64
>>>       backend: NFS 4.0
>>>
>>> Thanks,
>>> QI Fuli
>>>
>>> _______________________________________________
>>> Virtio-fs mailing list
>>> Virtio-fs@redhat.com
>>> https://www.redhat.com/mailman/listinfo/virtio-fs
>> --
>> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>>
>> _______________________________________________
>> Virtio-fs mailing list
>> Virtio-fs@redhat.com
>> https://www.redhat.com/mailman/listinfo/virtio-fs
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> 



* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-10  9:57     ` qi.fuli
@ 2019-10-11  9:21       ` Dr. David Alan Gilbert
  2019-10-11 18:45       ` Chirantan Ekbote
  1 sibling, 0 replies; 32+ messages in thread
From: Dr. David Alan Gilbert @ 2019-10-11  9:21 UTC (permalink / raw)
  To: qi.fuli; +Cc: virtio-fs, misono.tomohiro, masayoshi.mizuma

* qi.fuli@fujitsu.com (qi.fuli@fujitsu.com) wrote:
> Hi,
> 
> Thank you for your comments.
> 
> On 10/10/19 1:51 AM, Dr. David Alan Gilbert wrote:
> > * Dr. David Alan Gilbert (dgilbert@redhat.com) wrote:
> >> * qi.fuli@fujitsu.com (qi.fuli@fujitsu.com) wrote:
> >>> Hello,
> >>>
> >>
> >> Hi,
> > 
> > In addition to the other questions, I'd appreciate
> > if you could explain your xfstests setup and the way you told
> > it about the virtio mounts.
> 
> In order to run the tests on virtio-fs, I did the following changes in 
> xfstest[1].
> 
> diff --git a/check b/check

Thanks; it would be great if you could send these changes upstream
to xfstest;  I know there's at least one other person who has written
xfstest changes.

I'm hitting some problems getting it to run; generic/013 reliably
triggers an NFS kernel client oops on the Fedora kernels I'm using
on the host.

I have repeated the access time error you're seeing for generic/003.

Dave

> The command to run xfstests for virtio-fs is:
> $ ./check -virtiofs generic/???
> 
> [1] https://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
> 
> > 
> > Dave
> > 
> >>
> >>> We have run the generic tests of xfstest for virtio-fs[1] on aarch64[2],
> >>> here we selected some tests that did not run or failed to run,
> >>> and we categorized them, basing on the reasons in our understanding.
> >>
> >> Thanks for sharing your test results.
> >>
> >>>    * Category 1: generic/003, generic/192
> >>>      Error: access time error
> >>>      Reason: file_accessed() not run
> >>>    * Category 2: generic/089, generic/478, generic/484, generic/504
> >>>      Error: lock error
> >>>    * Category 3: generic/426, generic/467, generic/477
> >>>      Error: open_by_handle error
> >>>    * Category 4: generic/551
> >>>      Error: kvm panic
> >>
> >> I'm not expecting a KVM panic; can you give us a copy of the
> >> oops/panic/backtrace you're seeing?
> 
> Sorry, In my recent tests, the KVM panic didn't happen, but the OOM 
> event occurred. I will expand the memory and test it again, please give 
> me a little more time.
> 
> >>
> >>>    * Category 5: generic/011, generic/013
> >>>      Error: cannot remove file
> >>>      Reason: NFS backend
> >>>    * Category 6: generic/035
> >>>      Error: nlink is 1, should be 0
> >>>    * Category 7: generic/125, generic/193, generic/314
> >>>      Error: open/chown/mkdir permission error
> >>>    * Category 8: generic/469
> >>>      Error: fallocate keep_size is needed
> >>>      Reason: NFS4.0 backend
> >>>    * Category 9: generic/323
> >>>      Error: system hang
> >>>      Reason: fd is close before AIO finished
> >>
> >> When you 'say system hang' - you mean the whole guest hanging?
> >> Did the virtiofsd process hang or crash?
> 
> No, not the whole guest, only the test process hanging. The virtiofsd 
> process keeps working. Here are some debug messages:
> 
> [ 7740.126845] INFO: task aio-last-ref-he:3361 blocked for more than 122 
> seconds.
> [ 7740.128884]       Not tainted 5.4.0-rc1-aarch64-5.4-rc1 #1
> [ 7740.130364] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" 
> disables this message.
> [ 7740.132472] aio-last-ref-he D    0  3361   3143 0x00000220
> [ 7740.133954] Call trace:
> [ 7740.134627]  __switch_to+0x98/0x1e0
> [ 7740.135579]  __schedule+0x29c/0x688
> [ 7740.136527]  schedule+0x38/0xb8
> [ 7740.137615]  schedule_timeout+0x258/0x358
> [ 7740.139160]  wait_for_completion+0x174/0x400
> [ 7740.140322]  exit_aio+0x118/0x6c0
> [ 7740.141226]  mmput+0x6c/0x1c0
> [ 7740.142036]  do_exit+0x29c/0xa58
> [ 7740.142915]  do_group_exit+0x48/0xb0
> [ 7740.143888]  get_signal+0x168/0x8b0
> [ 7740.144836]  do_notify_resume+0x174/0x3d8
> [ 7740.145925]  work_pending+0x8/0x10
> [ 7863.006847] INFO: task aio-last-ref-he:3361 blocked for more than 245 
> seconds.
> [ 7863.008876]       Not tainted 5.4.0-rc1-aarch64-5.4-rc1 #1
> 
> Thanks,
> QI Fuli
> 
> >>>
> >>> We would like to know if virtio-fs does not support these tests in
> >>> the specification or they are bugs that need to be fixed.
> >>> It would be very appreciated if anyone could give some comments.
> >>
> >> It'll take us a few days to go through and figure that out; we'll
> >> try and replicate it.
> >>
> >> Dave
> >>
> >>>
> >>> [1] qemu: https://gitlab.com/virtio-fs/qemu/tree/virtio-fs-dev
> >>>       start qemu script:
> >>>       $VIRTIOFSD -o vhost_user_socket=/tmp/vhostqemu1 -o
> >>> source=/root/virtio-fs/test1/ -o cache=always -o xattr -o flock -d &
> >>>       $VIRTIOFSD -o vhost_user_socket=/tmp/vhostqemu2 -o
> >>> source=/root/virtio-fs/test2/ -o cache=always -o xattr -o flock -d &
> >>>       $QEMU -M virt,accel=kvm,gic_version=3 \
> >>>           -cpu host \
> >>>           -smp 8 \
> >>>           -m 8192\
> >>>           -nographic \
> >>>           -serial mon:stdio \
> >>>           -netdev tap,id=net0 -device
> >>> virtio-net-pci,netdev=net0,id=net0,mac=XX:XX:XX:XX:XX:XX \
> >>>           -object
> >>> memory-backend-file,id=mem,size=8G,mem-path=/dev/shm,share=on \
> >>>           -numa node,memdev=mem \
> >>>           -drive
> >>> file=/root/virtio-fs/AAVMF/AAVMF_CODE.fd,if=pflash,format=raw,unit=0,readonly=on
> >>> \
> >>>           -drive file=$VARS,if=pflash,format=raw,unit=1 \
> >>>           -chardev socket,id=char1,path=/tmp/vhostqemu1 \
> >>>           -device
> >>> vhost-user-fs-pci,queue-size=1024,chardev=char1,tag=myfs1,cache-size=0 \
> >>>           -chardev socket,id=char2,path=/tmp/vhostqemu2 \
> >>>           -device
> >>> vhost-user-fs-pci,queue-size=1024,chardev=char2,tag=myfs2,cache-size=0 \
> >>>           -drive if=virtio,file=/var/lib/libvirt/images/guest.img
> >>>
> >>> [2] host kernel: 4.18.0-80.4.2.el8_0.aarch64
> >>>       guest kernel: 5.4-rc1
> >>>       Arch: Arm64
> >>>       backend: NFS 4.0
> >>>
> >>> Thanks,
> >>> QI Fuli
> >>>
> >>> _______________________________________________
> >>> Virtio-fs mailing list
> >>> Virtio-fs@redhat.com
> >>> https://www.redhat.com/mailman/listinfo/virtio-fs
> >> --
> >> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> >>
> >> _______________________________________________
> >> Virtio-fs mailing list
> >> Virtio-fs@redhat.com
> >> https://www.redhat.com/mailman/listinfo/virtio-fs
> > --
> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> > 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-10  9:57     ` qi.fuli
  2019-10-11  9:21       ` Dr. David Alan Gilbert
@ 2019-10-11 18:45       ` Chirantan Ekbote
  2019-10-11 19:59         ` Vivek Goyal
  1 sibling, 1 reply; 32+ messages in thread
From: Chirantan Ekbote @ 2019-10-11 18:45 UTC (permalink / raw)
  To: qi.fuli; +Cc: virtio-fs, misono.tomohiro, masayoshi.mizuma

On Thu, Oct 10, 2019 at 7:00 PM qi.fuli@fujitsu.com <qi.fuli@fujitsu.com> wrote
>
>
> [ 7740.126845] INFO: task aio-last-ref-he:3361 blocked for more than 122
> seconds.
> [ 7740.128884]       Not tainted 5.4.0-rc1-aarch64-5.4-rc1 #1
> [ 7740.130364] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> disables this message.
> [ 7740.132472] aio-last-ref-he D    0  3361   3143 0x00000220
> [ 7740.133954] Call trace:
> [ 7740.134627]  __switch_to+0x98/0x1e0
> [ 7740.135579]  __schedule+0x29c/0x688
> [ 7740.136527]  schedule+0x38/0xb8
> [ 7740.137615]  schedule_timeout+0x258/0x358
> [ 7740.139160]  wait_for_completion+0x174/0x400
> [ 7740.140322]  exit_aio+0x118/0x6c0
> [ 7740.141226]  mmput+0x6c/0x1c0
> [ 7740.142036]  do_exit+0x29c/0xa58
> [ 7740.142915]  do_group_exit+0x48/0xb0
> [ 7740.143888]  get_signal+0x168/0x8b0
> [ 7740.144836]  do_notify_resume+0x174/0x3d8
> [ 7740.145925]  work_pending+0x8/0x10
> [ 7863.006847] INFO: task aio-last-ref-he:3361 blocked for more than 245
> seconds.
> [ 7863.008876]       Not tainted 5.4.0-rc1-aarch64-5.4-rc1 #1
>

I have also seen this hang while writing the virtio-fs device for
crosvm.  The issue is with the retry logic when the virtqueue is full,
which happens very easily when the device doesn't support DAX.

virtio_fs_wake_pending_and_unlock has code like this:

retry:
    ret = virtio_fs_enqueue_req(&fs->vqs[queue_id], req);
    if (ret < 0) {
        if (ret == -ENOMEM || ret == -ENOSPC) {
            /* Virtqueue full. Retry submission */
            /* TODO use completion instead of timeout */
            usleep_range(20, 30);
            goto retry;
        }


Spinning here is completely unsafe as this method may be called while
the fc->bg_lock is held.  The call chain is
`fuse_request_queue_background (acquire fc->bg_lock) -> flush_bg_queue
-> queue_request_and_unlock -> virtio_fs_wake_pending_and_unlock (spin
until virtqueue is not full)`.  This lock is also acquired when
finishing requests if there was a background request:
`virtio_fs_requests_done_work (pull completed requests out of
virtqueue) -> fuse_request_end (acquire fc->bg_lock when the
FR_BACKGROUND bit is set in req->flags)`.  This is a deadlock where
one thread keeps spinning waiting for the virtqueue to empty but the
thread that can actually empty the virtqueue is blocked on acquiring a
spin lock held by the original thread.

I have a local patch that tries to fix this by putting requests in
fpq->io before queueing them.  Then if the virtqueue is full, they are
removed from fpq->io and put on fpq->processing and
virtio_fs_requests_done has code to queue requests from
fpq->processing after it empties the virtqueue.  I don't know if
that's the proper way to fix it but it does make the hang go away.


This is unrelated, but there's also a nullptr dereference in the driver
if the device advertises more than one request queue.  This is the
problematic code:

/* Allocate fuse_dev for hiprio and notification queues */
for (i = 0; i < VQ_REQUEST; i++) {
    struct virtio_fs_vq *fsvq = &fs->vqs[i];

    fsvq->fud = fuse_dev_alloc();
    if (!fsvq->fud)
        goto err_free_fuse_devs;
}

ctx.fudptr = (void **)&fs->vqs[VQ_REQUEST].fud;
err = fuse_fill_super_common(sb, &ctx);
if (err < 0)
    goto err_free_fuse_devs;

fc = fs->vqs[VQ_REQUEST].fud->fc;

for (i = 0; i < fs->nvqs; i++) {
    struct virtio_fs_vq *fsvq = &fs->vqs[i];

    if (i == VQ_REQUEST)
        continue; /* already initialized */
    fuse_dev_install(fsvq->fud, fc);
}

The first loop only initializes fsvq->fud for queues up to VQ_REQUEST,
but the second loop then calls fuse_dev_install with fsvq->fud for all
queues up to fs->nvqs.  When fs->nvqs is greater than VQ_REQUEST + 1
(i.e. the device has more than one request queue), this ends up
dereferencing an uninitialized pointer.  I think the fix is to skip
calling fuse_dev_alloc for the VQ_REQUEST queue but call it for all the
other queues up to fs->nvqs.
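
A rough sketch of that idea (untested, just to make the suggestion
concrete; error handling as in the existing code):

/* Untested sketch: allocate a fuse_dev for every queue except VQ_REQUEST,
 * whose fud is filled in by fuse_fill_super_common() via ctx.fudptr. */
for (i = 0; i < fs->nvqs; i++) {
    struct virtio_fs_vq *fsvq = &fs->vqs[i];

    if (i == VQ_REQUEST)
        continue;

    fsvq->fud = fuse_dev_alloc();
    if (!fsvq->fud)
        goto err_free_fuse_devs;
}

The second loop would then find a valid fsvq->fud for every queue it
passes to fuse_dev_install.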


Chirantan



* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-11 18:45       ` Chirantan Ekbote
@ 2019-10-11 19:59         ` Vivek Goyal
  2019-10-11 20:13           ` Chirantan Ekbote
  0 siblings, 1 reply; 32+ messages in thread
From: Vivek Goyal @ 2019-10-11 19:59 UTC (permalink / raw)
  To: Chirantan Ekbote; +Cc: virtio-fs, misono.tomohiro, masayoshi.mizuma

On Sat, Oct 12, 2019 at 03:45:01AM +0900, Chirantan Ekbote wrote:
> On Thu, Oct 10, 2019 at 7:00 PM qi.fuli@fujitsu.com <qi.fuli@fujitsu.com> wrote
> >
> >
> > [ 7740.126845] INFO: task aio-last-ref-he:3361 blocked for more than 122
> > seconds.
> > [ 7740.128884]       Not tainted 5.4.0-rc1-aarch64-5.4-rc1 #1
> > [ 7740.130364] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> > disables this message.
> > [ 7740.132472] aio-last-ref-he D    0  3361   3143 0x00000220
> > [ 7740.133954] Call trace:
> > [ 7740.134627]  __switch_to+0x98/0x1e0
> > [ 7740.135579]  __schedule+0x29c/0x688
> > [ 7740.136527]  schedule+0x38/0xb8
> > [ 7740.137615]  schedule_timeout+0x258/0x358
> > [ 7740.139160]  wait_for_completion+0x174/0x400
> > [ 7740.140322]  exit_aio+0x118/0x6c0
> > [ 7740.141226]  mmput+0x6c/0x1c0
> > [ 7740.142036]  do_exit+0x29c/0xa58
> > [ 7740.142915]  do_group_exit+0x48/0xb0
> > [ 7740.143888]  get_signal+0x168/0x8b0
> > [ 7740.144836]  do_notify_resume+0x174/0x3d8
> > [ 7740.145925]  work_pending+0x8/0x10
> > [ 7863.006847] INFO: task aio-last-ref-he:3361 blocked for more than 245
> > seconds.
> > [ 7863.008876]       Not tainted 5.4.0-rc1-aarch64-5.4-rc1 #1
> >
> 
> I have also seen this hang while writing the virtio-fs device for
> crosvm.  The issue is with the retry logic when the virtqueue is full,
> which happens very easily when the device doesn't support DAX.
> 
> virtio_fs_wake_pending_and_unlock has code like this:
> 
> retry:
>     ret = virtio_fs_enqueue_req(&fs->vqs[queue_id], req);
>     if (ret < 0) {
>         if (ret == -ENOMEM || ret == -ENOSPC) {
>             /* Virtqueue full. Retry submission */
>             /* TODO use completion instead of timeout */
>             usleep_range(20, 30);
>             goto retry;
>         }
> 
> 
> Spinning here is completely unsafe as this method may be called while
> the fc->bg_lock is held.  The call chain is
> `fuse_request_queue_background (acquire fc->bg_lock) -> flush_bg_queue
> -> queue_request_and_unlock -> virtio_fs_wake_pending_and_unlock (spin
> until virtqueue is not full)`.  This lock is also acquired when
> finishing requests if there was a background request:
> `virtio_fs_requests_done_work (pull completed requests out of
> virtqueue) -> fuse_request_end (acquire fc->bg_lock when the
> FR_BACKGROUND bit is set in req->flags)`.  This is a deadlock where
> one thread keeps spinning waiting for the virtqueue to empty but the
> thread that can actually empty the virtqueue is blocked on acquiring a
> spin lock held by the original thread.

Agreed, this is an issue. I recently noticed the fc->bg_lock problem
when completing a request on error: we might be called with fc->bg_lock
held, and request completion takes fc->bg_lock as well, so we can't
complete the request in the caller's context. I have a patch queued
internally that puts such requests on a list and ends them with the
help of a worker (in the submission-error case).
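
The general shape is something like the rough sketch below (illustrative
only, not the actual queued patch; the end_reqs list and end_reqs_work
worker are made-up names, and their initialization is assumed to happen
where the virtqueues are set up):

/* Park requests that cannot be completed in the caller's context on a
 * per-queue list and let a worker end them later. */
static void virtio_fs_end_reqs_work(struct work_struct *work)
{
    struct virtio_fs_vq *fsvq = container_of(work, struct virtio_fs_vq,
                                             end_reqs_work);
    struct fuse_req *req, *next;
    LIST_HEAD(reqs);

    spin_lock(&fsvq->lock);
    list_splice_init(&fsvq->end_reqs, &reqs);
    spin_unlock(&fsvq->lock);

    /* Neither fc->bg_lock nor fiq->waitq.lock is held here, so ending
     * the requests cannot deadlock with the submission path. */
    list_for_each_entry_safe(req, next, &reqs, list) {
        list_del_init(&req->list);
        fuse_request_end(fsvq->fud->fc, req);
    }
}

/* In the submission-error path, instead of ending the request directly: */
spin_lock(&fsvq->lock);
list_add_tail(&req->list, &fsvq->end_reqs);
spin_unlock(&fsvq->lock);
schedule_work(&fsvq->end_reqs_work);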

> 
> I have a local patch that tries to fix this by putting requests in
> fpq->io before queueing them.  Then if the virtqueue is full, they are
> removed from fpq->io and put on fpq->processing and
> virtio_fs_requests_done has code to queue requests from
> fpq->processing after it empties the virtqueue.  I don't know if
> that's the proper way to fix it but it does make the hang go away.

I will look into fixing this.
> 
> 
> This is unrelated but there's also a nullptr dereference in the driver
> if the device advertises more than request queue.  This is problematic
> code:
> 
> /* Allocate fuse_dev for hiprio and notification queues */
> for (i = 0; i < VQ_REQUEST; i++) {
>     struct virtio_fs_vq *fsvq = &fs->vqs[i];
> 
>     fsvq->fud = fuse_dev_alloc();
>     if (!fsvq->fud)
>         goto err_free_fuse_devs;
> }
> 
> ctx.fudptr = (void **)&fs->vqs[VQ_REQUEST].fud;
> err = fuse_fill_super_common(sb, &ctx);
> if (err < 0)
>     goto err_free_fuse_devs;
> 
> fc = fs->vqs[VQ_REQUEST].fud->fc;
> 
> for (i = 0; i < fs->nvqs; i++) {
>     struct virtio_fs_vq *fsvq = &fs->vqs[i];
> 
>     if (i == VQ_REQUEST)
>         continue; /* already initialized */
>     fuse_dev_install(fsvq->fud, fc);
> }
> 
> The first loop only initializes fsvq->fud for queues up to VQ_REQUEST
> but then the second loop calls fuse_dev_install with fsvq->fud for all
> queues up to fs->nvqs.  When fs->nvqs is greater than VQ_REQUEST this
> will end up dereferencing an uninitialized pointer.  I think the fix
> is to skip calling fuse_dev_alloc for the VQ_REQUEST queue but then
> call it for all the other queues up to fs->nvqs.

Right now we support only one request queue; multiqueue support is not
there yet. We are planning to add multiqueue support in future releases,
though.

But I will probably clean up the code in setup_vqs() to accept only one
queue even if the device advertises more than one.

Thanks
Vivek



* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-11 19:59         ` Vivek Goyal
@ 2019-10-11 20:13           ` Chirantan Ekbote
  2019-10-11 20:36             ` Vivek Goyal
  0 siblings, 1 reply; 32+ messages in thread
From: Chirantan Ekbote @ 2019-10-11 20:13 UTC (permalink / raw)
  To: Vivek Goyal; +Cc: virtio-fs, misono.tomohiro, masayoshi.mizuma

On Sat, Oct 12, 2019 at 4:59 AM Vivek Goyal <vgoyal@redhat.com> wrote:
>
> On Sat, Oct 12, 2019 at 03:45:01AM +0900, Chirantan Ekbote wrote:
> > This is unrelated but there's also a nullptr dereference in the driver
> > if the device advertises more than request queue.  This is problematic
> > code:
> >
> > /* Allocate fuse_dev for hiprio and notification queues */
> > for (i = 0; i < VQ_REQUEST; i++) {
> >     struct virtio_fs_vq *fsvq = &fs->vqs[i];
> >
> >     fsvq->fud = fuse_dev_alloc();
> >     if (!fsvq->fud)
> >         goto err_free_fuse_devs;
> > }
> >
> > ctx.fudptr = (void **)&fs->vqs[VQ_REQUEST].fud;
> > err = fuse_fill_super_common(sb, &ctx);
> > if (err < 0)
> >     goto err_free_fuse_devs;
> >
> > fc = fs->vqs[VQ_REQUEST].fud->fc;
> >
> > for (i = 0; i < fs->nvqs; i++) {
> >     struct virtio_fs_vq *fsvq = &fs->vqs[i];
> >
> >     if (i == VQ_REQUEST)
> >         continue; /* already initialized */
> >     fuse_dev_install(fsvq->fud, fc);
> > }
> >
> > The first loop only initializes fsvq->fud for queues up to VQ_REQUEST
> > but then the second loop calls fuse_dev_install with fsvq->fud for all
> > queues up to fs->nvqs.  When fs->nvqs is greater than VQ_REQUEST this
> > will end up dereferencing an uninitialized pointer.  I think the fix
> > is to skip calling fuse_dev_alloc for the VQ_REQUEST queue but then
> > call it for all the other queues up to fs->nvqs.
>
> Right now we support only 1 request queue and multi queue support is
> not there. Planning to add multiqueue support in future releases though.
>

Could you elaborate on what is missing for multiqueue support?  For
reference, this patch was enough to get it working for me:

@@ -922,7 +990,8 @@ static int virtio_fs_enqueue_req(struct virtqueue *vq, struct fuse_req *req)
 static void virtio_fs_wake_pending_and_unlock(struct fuse_iqueue *fiq)
 __releases(fiq->waitq.lock)
 {
-       unsigned queue_id = VQ_REQUEST; /* TODO multiqueue */
+       /* unsigned queue_id = VQ_REQUEST; /\* TODO multiqueue *\/ */
+       unsigned queue_id;
        struct virtio_fs *fs;
        struct fuse_conn *fc;
        struct fuse_req *req;
@@ -937,6 +1006,7 @@ __releases(fiq->waitq.lock)
        spin_unlock(&fiq->waitq.lock);

        fs = fiq->priv;
+       queue_id = (req->in.h.unique % (fs->nvqs - 1)) + 1;
        fc = fs->vqs[queue_id].fud->fc;

        dev_dbg(&fs->vqs[queue_id].vq->vdev->dev,


This is simply round-robin scheduling but even going from one to two
queues gives a significant performance improvement (especially because
crosvm doesn't support shared memory regions yet).

Chirantan



* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-11 20:13           ` Chirantan Ekbote
@ 2019-10-11 20:36             ` Vivek Goyal
  2019-10-14  9:11               ` Stefan Hajnoczi
  0 siblings, 1 reply; 32+ messages in thread
From: Vivek Goyal @ 2019-10-11 20:36 UTC (permalink / raw)
  To: Chirantan Ekbote; +Cc: virtio-fs, misono.tomohiro, masayoshi.mizuma

On Sat, Oct 12, 2019 at 05:13:51AM +0900, Chirantan Ekbote wrote:
> On Sat, Oct 12, 2019 at 4:59 AM Vivek Goyal <vgoyal@redhat.com> wrote:
> >
> > On Sat, Oct 12, 2019 at 03:45:01AM +0900, Chirantan Ekbote wrote:
> > > This is unrelated but there's also a nullptr dereference in the driver
> > > if the device advertises more than request queue.  This is problematic
> > > code:
> > >
> > > /* Allocate fuse_dev for hiprio and notification queues */
> > > for (i = 0; i < VQ_REQUEST; i++) {
> > >     struct virtio_fs_vq *fsvq = &fs->vqs[i];
> > >
> > >     fsvq->fud = fuse_dev_alloc();
> > >     if (!fsvq->fud)
> > >         goto err_free_fuse_devs;
> > > }
> > >
> > > ctx.fudptr = (void **)&fs->vqs[VQ_REQUEST].fud;
> > > err = fuse_fill_super_common(sb, &ctx);
> > > if (err < 0)
> > >     goto err_free_fuse_devs;
> > >
> > > fc = fs->vqs[VQ_REQUEST].fud->fc;
> > >
> > > for (i = 0; i < fs->nvqs; i++) {
> > >     struct virtio_fs_vq *fsvq = &fs->vqs[i];
> > >
> > >     if (i == VQ_REQUEST)
> > >         continue; /* already initialized */
> > >     fuse_dev_install(fsvq->fud, fc);
> > > }
> > >
> > > The first loop only initializes fsvq->fud for queues up to VQ_REQUEST
> > > but then the second loop calls fuse_dev_install with fsvq->fud for all
> > > queues up to fs->nvqs.  When fs->nvqs is greater than VQ_REQUEST this
> > > will end up dereferencing an uninitialized pointer.  I think the fix
> > > is to skip calling fuse_dev_alloc for the VQ_REQUEST queue but then
> > > call it for all the other queues up to fs->nvqs.
> >
> > Right now we support only 1 request queue and multi queue support is
> > not there. Planning to add multiqueue support in future releases though.
> >
> 
> Could you elaborate on what is missing for multiqueue support?  For
> reference, this patch was enough to get it working for me:

Maybe not much is missing. It's just that we were focused on first
getting a basic version of virtiofs upstream, and then starting to look
at performance improvement features.

> 
> @@ -922,7 +990,8 @@ static int virtio_fs_enqueue_req(struct virtqueue
> *vq, struct fuse_req *req)
>  static void virtio_fs_wake_pending_and_unlock(struct fuse_iqueue *fiq)
>  __releases(fiq->waitq.lock)
>  {
> -       unsigned queue_id = VQ_REQUEST; /* TODO multiqueue */
> +       /* unsigned queue_id = VQ_REQUEST; /\* TODO multiqueue *\/ */
> +       unsigned queue_id;
>         struct virtio_fs *fs;
>         struct fuse_conn *fc;
>         struct fuse_req *req;
> @@ -937,6 +1006,7 @@ __releases(fiq->waitq.lock)
>         spin_unlock(&fiq->waitq.lock);
> 
>         fs = fiq->priv;
> +       queue_id = (req->in.h.unique % (fs->nvqs - 1)) + 1;
>         fc = fs->vqs[queue_id].fud->fc;
> 
>         dev_dbg(&fs->vqs[queue_id].vq->vdev->dev,
> 
> 
> This is simply round-robin scheduling but even going from one to two
> queues gives a significant performance improvement (especially because
> crosvm doesn't support shared memory regions yet).

Interesting. I thought virtiofsd is currently hard-coded to support
only one queue. Did you modify virtiofsd to support more than one
request queue?

Can you give some concrete numbers? How much performance improvement do
you see with multiqueue, and what workload are you using for testing?

Thanks
Vivek



* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-11 20:36             ` Vivek Goyal
@ 2019-10-14  9:11               ` Stefan Hajnoczi
  2019-10-15 14:58                 ` Chirantan Ekbote
  0 siblings, 1 reply; 32+ messages in thread
From: Stefan Hajnoczi @ 2019-10-14  9:11 UTC (permalink / raw)
  To: Vivek Goyal; +Cc: virtio-fs, masayoshi.mizuma, misono.tomohiro


On Fri, Oct 11, 2019 at 04:36:47PM -0400, Vivek Goyal wrote:
> On Sat, Oct 12, 2019 at 05:13:51AM +0900, Chirantan Ekbote wrote:
> > On Sat, Oct 12, 2019 at 4:59 AM Vivek Goyal <vgoyal@redhat.com> wrote:
> > @@ -922,7 +990,8 @@ static int virtio_fs_enqueue_req(struct virtqueue
> > *vq, struct fuse_req *req)
> >  static void virtio_fs_wake_pending_and_unlock(struct fuse_iqueue *fiq)
> >  __releases(fiq->waitq.lock)
> >  {
> > -       unsigned queue_id = VQ_REQUEST; /* TODO multiqueue */
> > +       /* unsigned queue_id = VQ_REQUEST; /\* TODO multiqueue *\/ */
> > +       unsigned queue_id;
> >         struct virtio_fs *fs;
> >         struct fuse_conn *fc;
> >         struct fuse_req *req;
> > @@ -937,6 +1006,7 @@ __releases(fiq->waitq.lock)
> >         spin_unlock(&fiq->waitq.lock);
> > 
> >         fs = fiq->priv;
> > +       queue_id = (req->in.h.unique % (fs->nvqs - 1)) + 1;
> >         fc = fs->vqs[queue_id].fud->fc;
> > 
> >         dev_dbg(&fs->vqs[queue_id].vq->vdev->dev,
> > 
> > 
> > This is simply round-robin scheduling but even going from one to two
> > queues gives a significant performance improvement (especially because
> > crosvm doesn't support shared memory regions yet).
> 
> Interesting. I thought virtiofsd is hard coded right now to support
> only one queue. Did you modify virtiofsd to support more than one
> request queue?

Right, virtiofsd currently refuses to bring up more than 1 request
queue.  The code can actually handle multiqueue now but there is no
command-line support for it yet.  The ability to set CPU affinity for
virtqueue threads could be introduced at the same time as enabling
multiqueue.
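
Purely as an illustration of what that could look like (not existing
virtiofsd code), each virtqueue thread could pin itself with something
like:

/* Illustration only: pin the calling virtqueue thread to a host CPU. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

static int pin_to_cpu(int cpu)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}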

Stefan



* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-14  9:11               ` Stefan Hajnoczi
@ 2019-10-15 14:58                 ` Chirantan Ekbote
  2019-10-15 15:57                   ` Dr. David Alan Gilbert
  2019-10-16 14:09                   ` Stefan Hajnoczi
  0 siblings, 2 replies; 32+ messages in thread
From: Chirantan Ekbote @ 2019-10-15 14:58 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: virtio-fs, misono.tomohiro, masayoshi.mizuma, Vivek Goyal

On Mon, Oct 14, 2019 at 6:11 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
>
> On Fri, Oct 11, 2019 at 04:36:47PM -0400, Vivek Goyal wrote:
> > On Sat, Oct 12, 2019 at 05:13:51AM +0900, Chirantan Ekbote wrote:
> > > On Sat, Oct 12, 2019 at 4:59 AM Vivek Goyal <vgoyal@redhat.com> wrote:
> > > @@ -922,7 +990,8 @@ static int virtio_fs_enqueue_req(struct virtqueue
> > > *vq, struct fuse_req *req)
> > >  static void virtio_fs_wake_pending_and_unlock(struct fuse_iqueue *fiq)
> > >  __releases(fiq->waitq.lock)
> > >  {
> > > -       unsigned queue_id = VQ_REQUEST; /* TODO multiqueue */
> > > +       /* unsigned queue_id = VQ_REQUEST; /\* TODO multiqueue *\/ */
> > > +       unsigned queue_id;
> > >         struct virtio_fs *fs;
> > >         struct fuse_conn *fc;
> > >         struct fuse_req *req;
> > > @@ -937,6 +1006,7 @@ __releases(fiq->waitq.lock)
> > >         spin_unlock(&fiq->waitq.lock);
> > >
> > >         fs = fiq->priv;
> > > +       queue_id = (req->in.h.unique % (fs->nvqs - 1)) + 1;
> > >         fc = fs->vqs[queue_id].fud->fc;
> > >
> > >         dev_dbg(&fs->vqs[queue_id].vq->vdev->dev,
> > >
> > >
> > > This is simply round-robin scheduling but even going from one to two
> > > queues gives a significant performance improvement (especially because
> > > crosvm doesn't support shared memory regions yet).
> >
> > Interesting. I thought virtiofsd is hard coded right now to support
> > only one queue. Did you modify virtiofsd to support more than one
> > request queue?
>
> Right, virtiofsd currently refuses to bring up more than 1 request
> queue.  The code can actually handle multiqueue now but there is no
> command-line support for it yet.  The ability to set CPU affinity for
> virtqueue threads could be introduced at the same time as enabling
> multiqueue.
>

I'm not using virtiofsd.  We have our own server for crosvm, which
supports multiple queues.

As for the performance numbers, I don't have my test device with me
but if I remember correctly the blogbench scores almost doubled when
going from one queue to two queues.


Chirantan



* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-15 14:58                 ` Chirantan Ekbote
@ 2019-10-15 15:57                   ` Dr. David Alan Gilbert
  2019-10-15 16:11                     ` Chirantan Ekbote
  2019-10-16 14:09                   ` Stefan Hajnoczi
  1 sibling, 1 reply; 32+ messages in thread
From: Dr. David Alan Gilbert @ 2019-10-15 15:57 UTC (permalink / raw)
  To: Chirantan Ekbote
  Cc: virtio-fs, misono.tomohiro, masayoshi.mizuma, Vivek Goyal

* Chirantan Ekbote (chirantan@chromium.org) wrote:
> On Mon, Oct 14, 2019 at 6:11 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
> >
> > On Fri, Oct 11, 2019 at 04:36:47PM -0400, Vivek Goyal wrote:
> > > On Sat, Oct 12, 2019 at 05:13:51AM +0900, Chirantan Ekbote wrote:
> > > > On Sat, Oct 12, 2019 at 4:59 AM Vivek Goyal <vgoyal@redhat.com> wrote:
> > > > @@ -922,7 +990,8 @@ static int virtio_fs_enqueue_req(struct virtqueue
> > > > *vq, struct fuse_req *req)
> > > >  static void virtio_fs_wake_pending_and_unlock(struct fuse_iqueue *fiq)
> > > >  __releases(fiq->waitq.lock)
> > > >  {
> > > > -       unsigned queue_id = VQ_REQUEST; /* TODO multiqueue */
> > > > +       /* unsigned queue_id = VQ_REQUEST; /\* TODO multiqueue *\/ */
> > > > +       unsigned queue_id;
> > > >         struct virtio_fs *fs;
> > > >         struct fuse_conn *fc;
> > > >         struct fuse_req *req;
> > > > @@ -937,6 +1006,7 @@ __releases(fiq->waitq.lock)
> > > >         spin_unlock(&fiq->waitq.lock);
> > > >
> > > >         fs = fiq->priv;
> > > > +       queue_id = (req->in.h.unique % (fs->nvqs - 1)) + 1;
> > > >         fc = fs->vqs[queue_id].fud->fc;
> > > >
> > > >         dev_dbg(&fs->vqs[queue_id].vq->vdev->dev,
> > > >
> > > >
> > > > This is simply round-robin scheduling but even going from one to two
> > > > queues gives a significant performance improvement (especially because
> > > > crosvm doesn't support shared memory regions yet).
> > >
> > > Interesting. I thought virtiofsd is hard coded right now to support
> > > only one queue. Did you modify virtiofsd to support more than one
> > > request queue?
> >
> > Right, virtiofsd currently refuses to bring up more than 1 request
> > queue.  The code can actually handle multiqueue now but there is no
> > command-line support for it yet.  The ability to set CPU affinity for
> > virtqueue threads could be introduced at the same time as enabling
> > multiqueue.
> >
> 
> I'm not using virtiofsd.  We have our own server for crosvm, which
> supports multiple queues.

Ah excellent; is that public anywhere?

> As for the performance numbers, I don't have my test device with me
> but if I remember correctly the blogbench scores almost doubled when
> going from one queue to two queues.

Very nice.

Dave

> 
> Chirantan
> 
> _______________________________________________
> Virtio-fs mailing list
> Virtio-fs@redhat.com
> https://www.redhat.com/mailman/listinfo/virtio-fs
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-15 15:57                   ` Dr. David Alan Gilbert
@ 2019-10-15 16:11                     ` Chirantan Ekbote
  2019-10-15 16:26                       ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 32+ messages in thread
From: Chirantan Ekbote @ 2019-10-15 16:11 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: virtio-fs, misono.tomohiro, masayoshi.mizuma, Vivek Goyal

On Wed, Oct 16, 2019 at 12:58 AM Dr. David Alan Gilbert
<dgilbert@redhat.com> wrote:
>
> * Chirantan Ekbote (chirantan@chromium.org) wrote:
> >
> > I'm not using virtiofsd.  We have our own server for crosvm, which
> > supports multiple queues.
>
> Ah excellent; is that public anywhere?
>

It's currently under review here:
https://chromium-review.googlesource.com/c/chromiumos/platform/crosvm/+/1758103

I expect to have it merged sometime this week.



* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-15 16:11                     ` Chirantan Ekbote
@ 2019-10-15 16:26                       ` Dr. David Alan Gilbert
  2019-10-15 17:28                         ` Boeuf, Sebastien
  0 siblings, 1 reply; 32+ messages in thread
From: Dr. David Alan Gilbert @ 2019-10-15 16:26 UTC (permalink / raw)
  To: Chirantan Ekbote, sebastien.boeuf, slp
  Cc: virtio-fs, misono.tomohiro, masayoshi.mizuma, Vivek Goyal

* Chirantan Ekbote (chirantan@chromium.org) wrote:
> On Wed, Oct 16, 2019 at 12:58 AM Dr. David Alan Gilbert
> <dgilbert@redhat.com> wrote:
> >
> > * Chirantan Ekbote (chirantan@chromium.org) wrote:
> > >
> > > I'm not using virtiofsd.  We have our own server for crosvm, which
> > > supports multiple queues.
> >
> > Ah excellent; is that public anywhere?
> >
> 
> It's currently under review here:
> https://chromium-review.googlesource.com/c/chromiumos/platform/crosvm/+/1758103
> 
> I expect to have it merged sometime this week.

Very nice; copying in Seb and Sergio.

I'll try and have a look through that.

Dave

--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-15 16:26                       ` Dr. David Alan Gilbert
@ 2019-10-15 17:28                         ` Boeuf, Sebastien
  2019-10-15 18:21                           ` Chirantan Ekbote
  0 siblings, 1 reply; 32+ messages in thread
From: Boeuf, Sebastien @ 2019-10-15 17:28 UTC (permalink / raw)
  To: dgilbert, slp, chirantan
  Cc: virtio-fs, masayoshi.mizuma, vgoyal, misono.tomohiro

Thanks Dave for adding me to the loop here!

On Tue, 2019-10-15 at 17:26 +0100, Dr. David Alan Gilbert wrote:
> * Chirantan Ekbote (chirantan@chromium.org) wrote:
> > On Wed, Oct 16, 2019 at 12:58 AM Dr. David Alan Gilbert
> > <dgilbert@redhat.com> wrote:
> > > * Chirantan Ekbote (chirantan@chromium.org) wrote:
> > > > I'm not using virtiofsd.  We have our own server for crosvm,
> > > > which
> > > > supports multiple queues.
> > > 
> > > Ah excellent; is that public anywhere?
> > > 
> > 
> > It's currently under review here:
> > https://chromium-review.googlesource.com/c/chromiumos/platform/crosvm/+/1758103
> > 
> > I expect to have it merged sometime this week.

Oh, that's very nice to hear. In the cloud-hypervisor project we have a
vhost-user-net implementation relying on a wrapper crate similar to what
all vhost-user daemon implementations could reuse (a libvhost-user kind
of thing).

We're planning to see this wrapper code land in Rust-VMM at some point,
which is why it would be interesting if you could rebase this virtio-fs
daemon code on top of this small library. If you don't have the
bandwidth, we could probably look into it, if that's something you would
be interested in.

Here is the code for the small wrapper crate: 
https://github.com/intel/cloud-hypervisor/blob/master/vhost_user_backend/src/lib.rs

And here is the vhost-user-net example using it:
https://github.com/intel/cloud-hypervisor/blob/master/src/bin/vhost_user_net.rs

Thanks,
Sebastien

> 
> Very nice; copying in Seb and Sergio.
> 
> I'll try and have a look through that.
> 
> Dave
> 
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-15 17:28                         ` Boeuf, Sebastien
@ 2019-10-15 18:21                           ` Chirantan Ekbote
  2019-10-16 14:14                             ` Stefan Hajnoczi
  2019-10-16 16:11                             ` Boeuf, Sebastien
  0 siblings, 2 replies; 32+ messages in thread
From: Chirantan Ekbote @ 2019-10-15 18:21 UTC (permalink / raw)
  To: Boeuf, Sebastien
  Cc: slp, virtio-fs, misono.tomohiro, masayoshi.mizuma, vgoyal

On Wed, Oct 16, 2019 at 2:28 AM Boeuf, Sebastien
<sebastien.boeuf@intel.com> wrote:
>
> Thanks Dave for adding me to the loop here!
>
> On Tue, 2019-10-15 at 17:26 +0100, Dr. David Alan Gilbert wrote:
> > * Chirantan Ekbote (chirantan@chromium.org) wrote:
> > > On Wed, Oct 16, 2019 at 12:58 AM Dr. David Alan Gilbert
> > > <dgilbert@redhat.com> wrote:
> > > > * Chirantan Ekbote (chirantan@chromium.org) wrote:
> > > > > I'm not using virtiofsd.  We have our own server for crosvm,
> > > > > which
> > > > > supports multiple queues.
> > > >
> > > > Ah excellent; is that public anywhere?
> > > >
> > >
> > > It's currently under review here:
> > > https://chromium-review.googlesource.com/c/chromiumos/platform/crosvm/+/1758103
> > >
> > > I expect to have it merged sometime this week.
>
> Oh that's very nice to hear. We have some code from the cloud-
> hypervisor project where we have a vhost-user-net implementation
> relying on some sort of a wrapper code similar to what all vhost-user
> daemon implementations would reuse (libvhost-user kind of thing).
>
> We're planning to see this wrapper code land on Rust-VMM at some point,
> which is why it would be interesting if you could rebase this virtio-fs
> daemon code on top of this small library. If you don't have the
> bandwidth, we could probably look at it if that's something you would
> be interested in.
>
> Here is the code for the small wrapper crate:
> https://github.com/intel/cloud-hypervisor/blob/master/vhost_user_backend/src/lib.rs
>
> And here is the vhost-user-net example using it:
> https://github.com/intel/cloud-hypervisor/blob/master/src/bin/vhost_user_net.rs
>

Our virtio-fs server is not a vhost-user device (this was one of the
reasons we didn't want to use virtiofsd).  It currently just runs
directly as a child process of crosvm (like all other crosvm devices).
So unless I'm missing something I don't think there's much to be
gained by rebasing on vhost_user_backend.

Chirantan


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-15 14:58                 ` Chirantan Ekbote
  2019-10-15 15:57                   ` Dr. David Alan Gilbert
@ 2019-10-16 14:09                   ` Stefan Hajnoczi
  2019-10-16 19:36                     ` Chirantan Ekbote
  1 sibling, 1 reply; 32+ messages in thread
From: Stefan Hajnoczi @ 2019-10-16 14:09 UTC (permalink / raw)
  To: Chirantan Ekbote
  Cc: virtio-fs, misono.tomohiro, masayoshi.mizuma, Vivek Goyal

On Tue, Oct 15, 2019 at 11:58:46PM +0900, Chirantan Ekbote wrote:
> On Mon, Oct 14, 2019 at 6:11 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
> >
> > On Fri, Oct 11, 2019 at 04:36:47PM -0400, Vivek Goyal wrote:
> > > On Sat, Oct 12, 2019 at 05:13:51AM +0900, Chirantan Ekbote wrote:
> > > > On Sat, Oct 12, 2019 at 4:59 AM Vivek Goyal <vgoyal@redhat.com> wrote:
> > > > @@ -922,7 +990,8 @@ static int virtio_fs_enqueue_req(struct virtqueue
> > > > *vq, struct fuse_req *req)
> > > >  static void virtio_fs_wake_pending_and_unlock(struct fuse_iqueue *fiq)
> > > >  __releases(fiq->waitq.lock)
> > > >  {
> > > > -       unsigned queue_id = VQ_REQUEST; /* TODO multiqueue */
> > > > +       /* unsigned queue_id = VQ_REQUEST; /\* TODO multiqueue *\/ */
> > > > +       unsigned queue_id;
> > > >         struct virtio_fs *fs;
> > > >         struct fuse_conn *fc;
> > > >         struct fuse_req *req;
> > > > @@ -937,6 +1006,7 @@ __releases(fiq->waitq.lock)
> > > >         spin_unlock(&fiq->waitq.lock);
> > > >
> > > >         fs = fiq->priv;
> > > > +       queue_id = (req->in.h.unique % (fs->nvqs - 1)) + 1;
> > > >         fc = fs->vqs[queue_id].fud->fc;
> > > >
> > > >         dev_dbg(&fs->vqs[queue_id].vq->vdev->dev,
> > > >
> > > >
> > > > This is simply round-robin scheduling but even going from one to two
> > > > queues gives a significant performance improvement (especially because
> > > > crosvm doesn't support shared memory regions yet).
> > >
> > > Interesting. I thought virtiofsd is hard coded right now to support
> > > only one queue. Did you modify virtiofsd to support more than one
> > > request queue?
> >
> > Right, virtiofsd currently refuses to bring up more than 1 request
> > queue.  The code can actually handle multiqueue now but there is no
> > command-line support for it yet.  The ability to set CPU affinity for
> > virtqueue threads could be introduced at the same time as enabling
> > multiqueue.
> >
> 
> I'm not using virtiofsd.  We have our own server for crosvm, which
> supports multiple queues.
> 
> As for the performance numbers, I don't have my test device with me
> but if I remember correctly the blogbench scores almost doubled when
> going from one queue to two queues.

If I understand the code correctly, each virtqueue is processed by a
worker and the number of virtqueues is the vCPU count.  This means that
requests are processed in a blocking fashion and the disk is unlikely to
reach its maximum queue depth (e.g. often 64 requests).

Going from 1 virtqueue to 2 virtqueues will improve performance, but
it's still a far cry from the maximum queue depth that the host kernel
and hardware supports.  This is why virtiofsd processes multiple
requests in parallel on a single virtqueue using a thread pool.
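
To make that concrete, here is a minimal sketch (plain std-library Rust,
not the actual virtiofsd or crosvm code; FuseRequest and handle_request
are made-up placeholders) of a pool of workers draining requests popped
from a single queue, so that many requests can be blocked in the host
kernel at once even though they all arrived on one virtqueue:

// Minimal sketch: a pool of workers servicing requests from one queue.
// In a real daemon the requests come from virtqueue descriptor chains;
// here an mpsc channel stands in for the virtqueue pop loop.
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

struct FuseRequest {
    unique: u64, // FUSE request ID
}

fn handle_request(req: FuseRequest) {
    // Stand-in for the blocking filesystem work (open/read/write/...).
    println!("completed request {}", req.unique);
}

fn main() {
    let (tx, rx) = mpsc::channel::<FuseRequest>();
    let rx = Arc::new(Mutex::new(rx));

    // Several workers pull from the *same* queue, so many requests can
    // be in flight at once even with a single virtqueue.
    let mut workers = Vec::new();
    for _ in 0..8 {
        let rx = Arc::clone(&rx);
        workers.push(thread::spawn(move || loop {
            // The lock is only held while fetching the next request; it
            // is released before handle_request() runs.
            let req = match rx.lock().unwrap().recv() {
                Ok(r) => r,
                Err(_) => break, // channel closed: queue torn down
            };
            handle_request(req);
        }));
    }

    // Simulated virtqueue pop loop: 64 requests arrive on one queue.
    for unique in 0..64 {
        tx.send(FuseRequest { unique }).unwrap();
    }
    drop(tx);
    for w in workers {
        w.join().unwrap();
    }
}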

Stefan

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-15 18:21                           ` Chirantan Ekbote
@ 2019-10-16 14:14                             ` Stefan Hajnoczi
  2019-10-16 16:14                               ` Boeuf, Sebastien
  2019-10-16 16:11                             ` Boeuf, Sebastien
  1 sibling, 1 reply; 32+ messages in thread
From: Stefan Hajnoczi @ 2019-10-16 14:14 UTC (permalink / raw)
  To: Chirantan Ekbote
  Cc: slp, virtio-fs, misono.tomohiro, Boeuf, Sebastien,
	masayoshi.mizuma, vgoyal

On Wed, Oct 16, 2019 at 03:21:52AM +0900, Chirantan Ekbote wrote:
> On Wed, Oct 16, 2019 at 2:28 AM Boeuf, Sebastien
> <sebastien.boeuf@intel.com> wrote:
> >
> > Thanks Dave for adding me to the loop here!
> >
> > On Tue, 2019-10-15 at 17:26 +0100, Dr. David Alan Gilbert wrote:
> > > * Chirantan Ekbote (chirantan@chromium.org) wrote:
> > > > On Wed, Oct 16, 2019 at 12:58 AM Dr. David Alan Gilbert
> > > > <dgilbert@redhat.com> wrote:
> > > > > * Chirantan Ekbote (chirantan@chromium.org) wrote:
> > > > > > I'm not using virtiofsd.  We have our own server for crosvm,
> > > > > > which
> > > > > > supports multiple queues.
> > > > >
> > > > > Ah excellent; is that public anywhere?
> > > > >
> > > >
> > > > It's currently under review here:
> > > > https://chromium-review.googlesource.com/c/chromiumos/platform/crosvm/+/1758103
> > > >
> > > > I expect to have it merged sometime this week.
> >
> > Oh that's very nice to hear. We have some code from the cloud-
> > hypervisor project where we have a vhost-user-net implementation
> > relying on some sort of a wrapper code similar to what all vhost-user
> > daemon implementations would reuse (libvhost-user kind of thing).
> >
> > We're planning to see this wrapper code land on Rust-VMM at some point,
> > which is why it would be interesting if you could rebase this virtio-fs
> > daemon code on top of this small library. If you don't have the
> > bandwidth, we could probably look at it if that's something you would
> > be interested in.
> >
> > Here is the code for the small wrapper crate:
> > https://github.com/intel/cloud-hypervisor/blob/master/vhost_user_backend/src/lib.rs
> >
> > And here is the vhost-user-net example using it:
> > https://github.com/intel/cloud-hypervisor/blob/master/src/bin/vhost_user_net.rs
> >
> 
> Our virtio-fs server is not a vhost-user device (this was one of the
> reasons we didn't want to use virtiofsd).  It currently just runs
> directly as a child process of crosvm (like all other crosvm devices).
> So unless I'm missing something I don't think there's much to be
> gained by rebasing on vhost_user_backend.

I'm also interested in porting the server over to vhost-user so that
other VMMs like QEMU and cloud-hypervisor can use it.

From a quick look at the code last night, here are things that were not
carried over from virtiofsd:
 * DAX is not supported
 * The FUSE protocol version is a bit dated, e.g. copy_file_range() is
   not supported.
 * Requests are processed synchronously for each virtqueue and therefore
   performance is limited.  The server won't saturate the queue depth
   supported by the host kernel and hardware.

Have I missed anything?  Otherwise it looks quite close to
passthrough_ll.c.

Stefan

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-15 18:21                           ` Chirantan Ekbote
  2019-10-16 14:14                             ` Stefan Hajnoczi
@ 2019-10-16 16:11                             ` Boeuf, Sebastien
  2019-10-16 18:37                               ` Stefan Hajnoczi
  1 sibling, 1 reply; 32+ messages in thread
From: Boeuf, Sebastien @ 2019-10-16 16:11 UTC (permalink / raw)
  To: chirantan; +Cc: slp, virtio-fs, misono.tomohiro, masayoshi.mizuma, vgoyal

On Wed, 2019-10-16 at 03:21 +0900, Chirantan Ekbote wrote:
> On Wed, Oct 16, 2019 at 2:28 AM Boeuf, Sebastien
> <sebastien.boeuf@intel.com> wrote:
> > Thanks Dave for adding me to the loop here!
> > 
> > On Tue, 2019-10-15 at 17:26 +0100, Dr. David Alan Gilbert wrote:
> > > * Chirantan Ekbote (chirantan@chromium.org) wrote:
> > > > On Wed, Oct 16, 2019 at 12:58 AM Dr. David Alan Gilbert
> > > > <dgilbert@redhat.com> wrote:
> > > > > * Chirantan Ekbote (chirantan@chromium.org) wrote:
> > > > > > I'm not using virtiofsd.  We have our own server for
> > > > > > crosvm,
> > > > > > which
> > > > > > supports multiple queues.
> > > > > 
> > > > > Ah excellent; is that public anywhere?
> > > > > 
> > > > 
> > > > It's currently under review here:
> > > > https://chromium-review.googlesource.com/c/chromiumos/platform/crosvm/+/1758103
> > > > 
> > > > I expect to have it merged sometime this week.
> > 
> > Oh that's very nice to hear. We have some code from the cloud-
> > hypervisor project where we have a vhost-user-net implementation
> > relying on some sort of a wrapper code similar to what all vhost-
> > user
> > daemon implementations would reuse (libvhost-user kind of thing).
> > 
> > We're planning to see this wrapper code land on Rust-VMM at some
> > point,
> > which is why it would be interesting if you could rebase this
> > virtio-fs
> > daemon code on top of this small library. If you don't have the
> > bandwidth, we could probably look at it if that's something you
> > would
> > be interested in.
> > 
> > Here is the code for the small wrapper crate:
> > https://github.com/intel/cloud-hypervisor/blob/master/vhost_user_backend/src/lib.rs
> > 
> > And here is the vhost-user-net example using it:
> > https://github.com/intel/cloud-hypervisor/blob/master/src/bin/vhost_user_net.rs
> > 
> 
> Our virtio-fs server is not a vhost-user device (this was one of the
> reasons we didn't want to use virtiofsd).  It currently just runs
> directly as a child process of crosvm (like all other crosvm
> devices).
> So unless I'm missing something I don't think there's much to be
> gained by rebasing on vhost_user_backend.

Oh I didn't realize it was not a separate daemon. So yes, from your
perspective, there's no point in rebasing on top of vhost_user_backend.
I still believe it might be interesting for us to share as much code as
possible between projects, which is why we might reuse your code to
create a standalone daemon in Rust.

Thanks,
Sebastien

> 
> Chirantan


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-16 14:14                             ` Stefan Hajnoczi
@ 2019-10-16 16:14                               ` Boeuf, Sebastien
  2019-10-16 18:38                                 ` Stefan Hajnoczi
  0 siblings, 1 reply; 32+ messages in thread
From: Boeuf, Sebastien @ 2019-10-16 16:14 UTC (permalink / raw)
  To: stefanha, chirantan
  Cc: slp, virtio-fs, misono.tomohiro, masayoshi.mizuma, vgoyal

On Wed, 2019-10-16 at 15:14 +0100, Stefan Hajnoczi wrote:
> On Wed, Oct 16, 2019 at 03:21:52AM +0900, Chirantan Ekbote wrote:
> > On Wed, Oct 16, 2019 at 2:28 AM Boeuf, Sebastien
> > <sebastien.boeuf@intel.com> wrote:
> > > Thanks Dave for adding me to the loop here!
> > > 
> > > On Tue, 2019-10-15 at 17:26 +0100, Dr. David Alan Gilbert wrote:
> > > > * Chirantan Ekbote (chirantan@chromium.org) wrote:
> > > > > On Wed, Oct 16, 2019 at 12:58 AM Dr. David Alan Gilbert
> > > > > <dgilbert@redhat.com> wrote:
> > > > > > * Chirantan Ekbote (chirantan@chromium.org) wrote:
> > > > > > > I'm not using virtiofsd.  We have our own server for
> > > > > > > crosvm,
> > > > > > > which
> > > > > > > supports multiple queues.
> > > > > > 
> > > > > > Ah excellent; is that public anywhere?
> > > > > > 
> > > > > 
> > > > > It's currently under review here:
> > > > > https://chromium-review.googlesource.com/c/chromiumos/platform/crosvm/+/1758103
> > > > > 
> > > > > I expect to have it merged sometime this week.
> > > 
> > > Oh that's very nice to hear. We have some code from the cloud-
> > > hypervisor project where we have a vhost-user-net implementation
> > > relying on some sort of a wrapper code similar to what all vhost-
> > > user
> > > daemon implementations would reuse (libvhost-user kind of thing).
> > > 
> > > We're planning to see this wrapper code land on Rust-VMM at some
> > > point,
> > > which is why it would be interesting if you could rebase this
> > > virtio-fs
> > > daemon code on top of this small library. If you don't have the
> > > bandwidth, we could probably look at it if that's something you
> > > would
> > > be interested in.
> > > 
> > > Here is the code for the small wrapper crate:
> > > https://github.com/intel/cloud-hypervisor/blob/master/vhost_user_backend/src/lib.rs
> > > 
> > > And here is the vhost-user-net example using it:
> > > https://github.com/intel/cloud-hypervisor/blob/master/src/bin/vhost_user_net.rs
> > > 
> > 
> > Our virtio-fs server is not a vhost-user device (this was one of
> > the
> > reasons we didn't want to use virtiofsd).  It currently just runs
> > directly as a child process of crosvm (like all other crosvm
> > devices).
> > So unless I'm missing something I don't think there's much to be
> > gained by rebasing on vhost_user_backend.
> 
> I'm also interested in porting the server over to vhost-user so that
> other VMMs like QEMU and cloud-hypervisor can use it.

Yes I agree that'd be awesome!
Are you planning to start working on this soon? Just asking as I want
to avoid potential duplicated efforts :)

Thanks,
Sebastien
> 
> From a quick look at the code last night, here are things that were
> not
> carried over from virtiofsd:
>  * DAX is not supported
>  * The FUSE protocol version is a bit dated, e.g. copy_file_range()
> is
>    not supported.
>  * Requests are processed synchronously for each virtqueue and
> therefore
>    performance is limited.  The server won't saturate the queue depth
>    supported by the host kernel and hardware.
> 
> Have I missed anything?  Otherwise it looks quite close to
> passthrough_ll.c.
> 
> Stefan


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-16 16:11                             ` Boeuf, Sebastien
@ 2019-10-16 18:37                               ` Stefan Hajnoczi
  0 siblings, 0 replies; 32+ messages in thread
From: Stefan Hajnoczi @ 2019-10-16 18:37 UTC (permalink / raw)
  To: Boeuf, Sebastien
  Cc: slp, virtio-fs, misono.tomohiro, masayoshi.mizuma, vgoyal

On Wed, Oct 16, 2019 at 04:11:48PM +0000, Boeuf, Sebastien wrote:
> On Wed, 2019-10-16 at 03:21 +0900, Chirantan Ekbote wrote:
> > On Wed, Oct 16, 2019 at 2:28 AM Boeuf, Sebastien
> > <sebastien.boeuf@intel.com> wrote:
> > > Thanks Dave for adding me to the loop here!
> > > 
> > > On Tue, 2019-10-15 at 17:26 +0100, Dr. David Alan Gilbert wrote:
> > > > * Chirantan Ekbote (chirantan@chromium.org) wrote:
> > > > > On Wed, Oct 16, 2019 at 12:58 AM Dr. David Alan Gilbert
> > > > > <dgilbert@redhat.com> wrote:
> > > > > > * Chirantan Ekbote (chirantan@chromium.org) wrote:
> > > > > > > I'm not using virtiofsd.  We have our own server for
> > > > > > > crosvm,
> > > > > > > which
> > > > > > > supports multiple queues.
> > > > > > 
> > > > > > Ah excellent; is that public anywhere?
> > > > > > 
> > > > > 
> > > > > It's currently under review here:
> > > > > https://chromium-review.googlesource.com/c/chromiumos/platform/crosvm/+/1758103
> > > > > 
> > > > > I expect to have it merged sometime this week.
> > > 
> > > Oh that's very nice to hear. We have some code from the cloud-
> > > hypervisor project where we have a vhost-user-net implementation
> > > relying on some sort of a wrapper code similar to what all vhost-
> > > user
> > > daemon implementations would reuse (libvhost-user kind of thing).
> > > 
> > > We're planning to see this wrapper code land on Rust-VMM at some
> > > point,
> > > which is why it would be interesting if you could rebase this
> > > virtio-fs
> > > daemon code on top of this small library. If you don't have the
> > > bandwidth, we could probably look at it if that's something you
> > > would
> > > be interested in.
> > > 
> > > Here is the code for the small wrapper crate:
> > > https://github.com/intel/cloud-hypervisor/blob/master/vhost_user_backend/src/lib.rs
> > > 
> > > And here is the vhost-user-net example using it:
> > > https://github.com/intel/cloud-hypervisor/blob/master/src/bin/vhost_user_net.rs
> > > 
> > 
> > Our virtio-fs server is not a vhost-user device (this was one of the
> > reasons we didn't want to use virtiofsd).  It currently just runs
> > directly as a child process of crosvm (like all other crosvm
> > devices).
> > So unless I'm missing something I don't think there's much to be
> > gained by rebasing on vhost_user_backend.
> 
> Oh I didn't realize it was not a separate daemon. So yes, from your
> perspective, there's no point in rebasing on top of vhost_user_backend.
> I still believe it might be interesting for us to share as much code as
> possible between projects, which is why we might reuse your code to
> create a standalone daemon in Rust.

I think the server code can be shared and it can live in rust-vmm.

I haven't looked in detail, but crosvm just has a slightly different
out-of-process device model from vhost-user; effectively they are
both doing the same thing.  We'll need a bit of vhost-user glue, but the
core server code should be the same.

Stefan

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-16 16:14                               ` Boeuf, Sebastien
@ 2019-10-16 18:38                                 ` Stefan Hajnoczi
  2019-10-17  5:19                                   ` Boeuf, Sebastien
  2019-11-01 22:26                                   ` Boeuf, Sebastien
  0 siblings, 2 replies; 32+ messages in thread
From: Stefan Hajnoczi @ 2019-10-16 18:38 UTC (permalink / raw)
  To: Boeuf, Sebastien
  Cc: slp, virtio-fs, misono.tomohiro, masayoshi.mizuma, vgoyal

On Wed, Oct 16, 2019 at 04:14:25PM +0000, Boeuf, Sebastien wrote:
> On Wed, 2019-10-16 at 15:14 +0100, Stefan Hajnoczi wrote:
> > On Wed, Oct 16, 2019 at 03:21:52AM +0900, Chirantan Ekbote wrote:
> > > On Wed, Oct 16, 2019 at 2:28 AM Boeuf, Sebastien
> > > <sebastien.boeuf@intel.com> wrote:
> > > > Thanks Dave for adding me to the loop here!
> > > > 
> > > > On Tue, 2019-10-15 at 17:26 +0100, Dr. David Alan Gilbert wrote:
> > > > > * Chirantan Ekbote (chirantan@chromium.org) wrote:
> > > > > > On Wed, Oct 16, 2019 at 12:58 AM Dr. David Alan Gilbert
> > > > > > <dgilbert@redhat.com> wrote:
> > > > > > > * Chirantan Ekbote (chirantan@chromium.org) wrote:
> > > > > > > > I'm not using virtiofsd.  We have our own server for
> > > > > > > > crosvm,
> > > > > > > > which
> > > > > > > > supports multiple queues.
> > > > > > > 
> > > > > > > Ah excellent; is that public anywhere?
> > > > > > > 
> > > > > > 
> > > > > > It's currently under review here:
> > > > > > https://chromium-review.googlesource.com/c/chromiumos/platform/crosvm/+/1758103
> > > > > > 
> > > > > > I expect to have it merged sometime this week.
> > > > 
> > > > Oh that's very nice to hear. We have some code from the cloud-
> > > > hypervisor project where we have a vhost-user-net implementation
> > > > relying on some sort of a wrapper code similar to what all vhost-
> > > > user
> > > > daemon implementations would reuse (libvhost-user kind of thing).
> > > > 
> > > > We're planning to see this wrapper code land on Rust-VMM at some
> > > > point,
> > > > which is why it would be interesting if you could rebase this
> > > > virtio-fs
> > > > daemon code on top of this small library. If you don't have the
> > > > bandwidth, we could probably look at it if that's something you
> > > > would
> > > > be interested in.
> > > > 
> > > > Here is the code for the small wrapper crate:
> > > > https://github.com/intel/cloud-hypervisor/blob/master/vhost_user_backend/src/lib.rs
> > > > 
> > > > And here is the vhost-user-net example using it:
> > > > https://github.com/intel/cloud-hypervisor/blob/master/src/bin/vhost_user_net.rs
> > > > 
> > > 
> > > Our virtio-fs server is not a vhost-user device (this was one of
> > > the
> > > reasons we didn't want to use virtiofsd).  It currently just runs
> > > directly as a child process of crosvm (like all other crosvm
> > > devices).
> > > So unless I'm missing something I don't think there's much to be
> > > gained by rebasing on vhost_user_backend.
> > 
> > I'm also interested in porting the server over to vhost-user so that
> > other VMMs like QEMU and cloud-hypervisor can use it.
> 
> Yes I agree that'd be awesome!
> Are you planning to start working on this soon? Just asking as I want
> to avoid potential duplicated efforts :)

Not immediately.  If you are ready to work on it and make your branch
public then we can follow along.

Stefan

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-16 14:09                   ` Stefan Hajnoczi
@ 2019-10-16 19:36                     ` Chirantan Ekbote
  2019-10-16 19:44                       ` Vivek Goyal
  2019-10-17  9:23                       ` Stefan Hajnoczi
  0 siblings, 2 replies; 32+ messages in thread
From: Chirantan Ekbote @ 2019-10-16 19:36 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: virtio-fs, misono.tomohiro, masayoshi.mizuma, Vivek Goyal

On Wed, Oct 16, 2019 at 11:10 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
>
> On Tue, Oct 15, 2019 at 11:58:46PM +0900, Chirantan Ekbote wrote:
> >
> > As for the performance numbers, I don't have my test device with me
> > but if I remember correctly the blogbench scores almost doubled when
> > going from one queue to two queues.
>
> If I understand the code correctly each virtqueue is processed by a
> worker and the number of virtqueues is the vcpu count.  This means that
> requests are processed in a blocking fashion and the disk is unlikely to
> reach its maximum queue depth (e.g. often 64 requests).
>
> Going from 1 virtqueue to 2 virtqueues will improve performance, but
> it's still a far cry from the maximum queue depth that the host kernel
> and hardware supports.  This is why virtiofsd processes multiple
> requests in parallel on a single virtqueue using a thread pool.

Hmm, maybe I'm missing something, but isn't this still limited by the
number of requests that can fit in the virtqueue?  Unlike virtio-net,
there is no separate queue for sending data back from host -> guest.
Each request also includes a buffer for the response, so once the
virtqueue is full, all other requests block in the guest until a
request is completed and can be removed from the queue.  So even in a
thread pool model, going from 1 virtqueue to 2 virtqueues would double
the number of requests that can be handled in parallel.
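
As a back-of-the-envelope illustration (the numbers and names below are
made up, purely to make the arithmetic concrete), the ceiling on
in-flight requests scales with the number of request queues times the
queue size, since each request holds its descriptor chain until it
completes:

// Illustrative only: upper bound on concurrently in-flight FUSE requests.
// Each request occupies one descriptor chain (request + response buffers)
// in its queue until the device completes it.
fn max_in_flight(request_queues: u32, queue_size: u32) -> u32 {
    request_queues * queue_size
}

fn main() {
    for &queues in [1u32, 2, 4].iter() {
        println!(
            "{} request queue(s) of size 128 -> at most {} requests in flight",
            queues,
            max_in_flight(queues, 128)
        );
    }
}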

>
> Stefan


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-16 19:36                     ` Chirantan Ekbote
@ 2019-10-16 19:44                       ` Vivek Goyal
  2019-10-17  9:23                       ` Stefan Hajnoczi
  1 sibling, 0 replies; 32+ messages in thread
From: Vivek Goyal @ 2019-10-16 19:44 UTC (permalink / raw)
  To: Chirantan Ekbote; +Cc: virtio-fs, misono.tomohiro, masayoshi.mizuma

On Thu, Oct 17, 2019 at 04:36:33AM +0900, Chirantan Ekbote wrote:
> On Wed, Oct 16, 2019 at 11:10 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
> >
> > On Tue, Oct 15, 2019 at 11:58:46PM +0900, Chirantan Ekbote wrote:
> > >
> > > As for the performance numbers, I don't have my test device with me
> > > but if I remember correctly the blogbench scores almost doubled when
> > > going from one queue to two queues.
> >
> > If I understand the code correctly each virtqueue is processed by a
> > worker and the number of virtqueues is the vcpu count.  This means that
> > requests are processed in a blocking fashion and the disk is unlikely to
> > reach its maximum queue depth (e.g. often 64 requests).
> >
> > Going from 1 virtqueue to 2 virtqueues will improve performance, but
> > it's still a far cry from the maximum queue depth that the host kernel
> > and hardware supports.  This is why virtiofsd processes multiple
> > requests in parallel on a single virtqueue using a thread pool.
> 
> Hmm, maybe I'm missing something but isn't this still limited by the
> number of requests that can fit in the virtqueue?  Unlike virtio-net
> there is no separate queue for sending data back from host -> guest.
> Each request also includes a buffer for the response so once the
> virtqueue is full all other requests block in the guest until a
> request is completed and can be removed from the queue.  So even in a
> thread pool model, going from 1 virtqueue to 2 virtqueues would double
> the number of requests that can be handled in parallel.

Yes. But the default queue size is big enough to pack a good number of
requests.  IIUC, it is 1024 on my VM, so 64 threads can do I/O on those
1024 requests.

But having multiple queues can make it even better and allow more I/O to
make progress in parallel.

We do have plans to implement another queue for notifying the guest,
sending notifications for things like a lock having been acquired, and
possibly metadata change notifications as well.

Thanks
Vivek


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-16 18:38                                 ` Stefan Hajnoczi
@ 2019-10-17  5:19                                   ` Boeuf, Sebastien
  2019-11-01 22:26                                   ` Boeuf, Sebastien
  1 sibling, 0 replies; 32+ messages in thread
From: Boeuf, Sebastien @ 2019-10-17  5:19 UTC (permalink / raw)
  To: stefanha; +Cc: slp, virtio-fs, misono.tomohiro, masayoshi.mizuma, vgoyal

On Wed, 2019-10-16 at 19:38 +0100, Stefan Hajnoczi wrote:
> On Wed, Oct 16, 2019 at 04:14:25PM +0000, Boeuf, Sebastien wrote:
> > On Wed, 2019-10-16 at 15:14 +0100, Stefan Hajnoczi wrote:
> > > On Wed, Oct 16, 2019 at 03:21:52AM +0900, Chirantan Ekbote wrote:
> > > > On Wed, Oct 16, 2019 at 2:28 AM Boeuf, Sebastien
> > > > <sebastien.boeuf@intel.com> wrote:
> > > > > Thanks Dave for adding me to the loop here!
> > > > > 
> > > > > On Tue, 2019-10-15 at 17:26 +0100, Dr. David Alan Gilbert
> > > > > wrote:
> > > > > > * Chirantan Ekbote (chirantan@chromium.org) wrote:
> > > > > > > On Wed, Oct 16, 2019 at 12:58 AM Dr. David Alan Gilbert
> > > > > > > <dgilbert@redhat.com> wrote:
> > > > > > > > * Chirantan Ekbote (chirantan@chromium.org) wrote:
> > > > > > > > > I'm not using virtiofsd.  We have our own server for
> > > > > > > > > crosvm,
> > > > > > > > > which
> > > > > > > > > supports multiple queues.
> > > > > > > > 
> > > > > > > > Ah excellent; is that public anywhere?
> > > > > > > > 
> > > > > > > 
> > > > > > > It's currently under review here:
> > > > > > > https://chromium-review.googlesource.com/c/chromiumos/platform/crosvm/+/1758103
> > > > > > > 
> > > > > > > I expect to have it merged sometime this week.
> > > > > 
> > > > > Oh that's very nice to hear. We have some code from the
> > > > > cloud-
> > > > > hypervisor project where we have a vhost-user-net
> > > > > implementation
> > > > > relying on some sort of a wrapper code similar to what all
> > > > > vhost-
> > > > > user
> > > > > daemon implementations would reuse (libvhost-user kind of
> > > > > thing).
> > > > > 
> > > > > We're planning to see this wrapper code land on Rust-VMM at
> > > > > some
> > > > > point,
> > > > > which is why it would be interesting if you could rebase this
> > > > > virtio-fs
> > > > > daemon code on top of this small library. If you don't have
> > > > > the
> > > > > bandwidth, we could probably look at it if that's something
> > > > > you
> > > > > would
> > > > > be interested in.
> > > > > 
> > > > > Here is the code for the small wrapper crate:
> > > > > https://github.com/intel/cloud-hypervisor/blob/master/vhost_user_backend/src/lib.rs
> > > > > 
> > > > > And here is the vhost-user-net example using it:
> > > > > https://github.com/intel/cloud-hypervisor/blob/master/src/bin/vhost_user_net.rs
> > > > > 
> > > > 
> > > > Our virtio-fs server is not a vhost-user device (this was one
> > > > of
> > > > the
> > > > reasons we didn't want to use virtiofsd).  It currently just
> > > > runs
> > > > directly as a child process of crosvm (like all other crosvm
> > > > devices).
> > > > So unless I'm missing something I don't think there's much to
> > > > be
> > > > gained by rebasing on vhost_user_backend.
> > > 
> > > I'm also interested in porting the server over to vhost-user so
> > > that
> > > other VMMs like QEMU and cloud-hypervisor can use it.
> > 
> > Yes I agree that'd be awesome!
> > Are you planning to start working on this soon? Just asking as I
> > want
> > to avoid potential duplicated efforts :)
> 
> Not immediately.  If you are ready to work on it and make your branch
> public then we can follow along.

Unfortunately, I don't have free cycles right now either. I'll ask
internally if we can have someone look at it, and I might also have
some time later next week.

I'll keep you posted!

Thanks,
Sebastien

> 
> Stefan


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-16 19:36                     ` Chirantan Ekbote
  2019-10-16 19:44                       ` Vivek Goyal
@ 2019-10-17  9:23                       ` Stefan Hajnoczi
  1 sibling, 0 replies; 32+ messages in thread
From: Stefan Hajnoczi @ 2019-10-17  9:23 UTC (permalink / raw)
  To: Chirantan Ekbote
  Cc: virtio-fs, misono.tomohiro, masayoshi.mizuma, Vivek Goyal

On Thu, Oct 17, 2019 at 04:36:33AM +0900, Chirantan Ekbote wrote:
> On Wed, Oct 16, 2019 at 11:10 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
> >
> > On Tue, Oct 15, 2019 at 11:58:46PM +0900, Chirantan Ekbote wrote:
> > >
> > > As for the performance numbers, I don't have my test device with me
> > > but if I remember correctly the blogbench scores almost doubled when
> > > going from one queue to two queues.
> >
> > If I understand the code correctly each virtqueue is processed by a
> > worker and the number of virtqueues is the vcpu count.  This means that
> > requests are processed in a blocking fashion and the disk is unlikely to
> > reach its maximum queue depth (e.g. often 64 requests).
> >
> > Going from 1 virtqueue to 2 virtqueues will improve performance, but
> > it's still a far cry from the maximum queue depth that the host kernel
> > and hardware supports.  This is why virtiofsd processes multiple
> > requests in parallel on a single virtqueue using a thread pool.
> 
> Hmm, maybe I'm missing something but isn't this still limited by the
> number of requests that can fit in the virtqueue?

Yes, but not a practical issue.  The maximum virtqueue size is chosen by
the VIRTIO device.  QEMU sets it to 128 and that's larger than the
typical queue depth of a disk.  So in practice the virtqueue size is not
the bottleneck.

It makes sense to set the number of virtqueues to the number of vCPUs to
achieve good CPU and interrupt scalability, but this number will be
lower than the disk's queue depth.  So in order to saturate the disk
it's necessary to process a virtqueue's requests in parallel (e.g.
thread pool, aio, etc).

Stefan

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-16 18:38                                 ` Stefan Hajnoczi
  2019-10-17  5:19                                   ` Boeuf, Sebastien
@ 2019-11-01 22:26                                   ` Boeuf, Sebastien
  2019-11-12  9:45                                     ` Stefan Hajnoczi
  1 sibling, 1 reply; 32+ messages in thread
From: Boeuf, Sebastien @ 2019-11-01 22:26 UTC (permalink / raw)
  To: stefanha, dgilbert, Ortiz, Samuel, Bradford, Robert
  Cc: slp, virtio-fs, misono.tomohiro, masayoshi.mizuma, vgoyal

+Samuel
+Rob

Hi Stefan, Dave,

I had some time to get started on the virtiofsd in Rust, based on the
code from Chirantan. So basically I relied on the following Crosvm
branch (code not merged yet):
https://chromium-review.googlesource.com/c/chromiumos/platform/crosvm/+/1705654

From there I have ported the code that was needed to rely on the nice
implementation from Chirantan. I had to make it fit the vm-virtio crate
from Cloud-Hypervisor (since it's not yet complete on rust-vmm), and
also the vm-memory crate from rust-vmm.

Once the porting was done, I created a dedicated vhost-user-fs daemon
binary relying on the vhost_user_backend crate. I connected all the
dots together and the result is here:
https://github.com/intel/cloud-hypervisor/pull/404
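
For anyone who hasn't looked at the branch yet, the rough shape of the
glue is something like the sketch below. To be clear, this is NOT the
real vhost_user_backend API; the names here (Backend, VhostUserFsBackend,
process_queue) are all made up, purely to show the split between the
backend object and the event loop that drives it:

// Hypothetical sketch only; not the vhost_user_backend crate's real API.
// A backend object reports its queue layout and is called when a queue
// has work; a wrapper crate would drive process_queue() from vring kicks.

trait Backend {
    fn num_queues(&self) -> usize;     // hiprio queue + N request queues
    fn max_queue_size(&self) -> usize; // negotiated with the VMM
    fn process_queue(&mut self, queue_index: usize);
}

struct VhostUserFsBackend {
    request_queues: usize,
}

impl Backend for VhostUserFsBackend {
    fn num_queues(&self) -> usize {
        // virtio-fs exposes one high-priority queue plus request queues.
        1 + self.request_queues
    }

    fn max_queue_size(&self) -> usize {
        1024
    }

    fn process_queue(&mut self, queue_index: usize) {
        // A real daemon would pop descriptor chains here, decode the FUSE
        // request, call into the passthrough filesystem code and write
        // the reply into the device-writable descriptors.
        println!("draining queue {}", queue_index);
    }
}

fn main() {
    let mut backend = VhostUserFsBackend { request_queues: 2 };
    println!(
        "advertising {} queues of up to {} entries",
        backend.num_queues(),
        backend.max_queue_size()
    );
    // Stand-in for the event loop a wrapper crate would normally provide.
    for q in 0..backend.num_queues() {
        backend.process_queue(q);
    }
}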

I have listed the remaining tasks here: 
https://github.com/intel/cloud-hypervisor/pull/404#issue-335636924
(some can be quite big and should come as a follow up PR IMO).

I'll be on vacation next week, but if you have some time, it'd be very
nice to get everybody's feedback on this.

And of course, if you have more time to continue working on this, feel
free to reuse my branch 'vhost_user_fs_daemon' from my forked repo 
https://github.com/sboeuf/cloud-hypervisor-1.git

Thanks,
Sebastien


On Wed, 2019-10-16 at 19:38 +0100, Stefan Hajnoczi wrote:
> On Wed, Oct 16, 2019 at 04:14:25PM +0000, Boeuf, Sebastien wrote:
> > On Wed, 2019-10-16 at 15:14 +0100, Stefan Hajnoczi wrote:
> > > On Wed, Oct 16, 2019 at 03:21:52AM +0900, Chirantan Ekbote wrote:
> > > > On Wed, Oct 16, 2019 at 2:28 AM Boeuf, Sebastien
> > > > <sebastien.boeuf@intel.com> wrote:
> > > > > Thanks Dave for adding me to the loop here!
> > > > > 
> > > > > On Tue, 2019-10-15 at 17:26 +0100, Dr. David Alan Gilbert
> > > > > wrote:
> > > > > > * Chirantan Ekbote (chirantan@chromium.org) wrote:
> > > > > > > On Wed, Oct 16, 2019 at 12:58 AM Dr. David Alan Gilbert
> > > > > > > <dgilbert@redhat.com> wrote:
> > > > > > > > * Chirantan Ekbote (chirantan@chromium.org) wrote:
> > > > > > > > > I'm not using virtiofsd.  We have our own server for
> > > > > > > > > crosvm,
> > > > > > > > > which
> > > > > > > > > supports multiple queues.
> > > > > > > > 
> > > > > > > > Ah excellent; is that public anywhere?
> > > > > > > > 
> > > > > > > 
> > > > > > > It's currently under review here:
> > > > > > > https://chromium-review.googlesource.com/c/chromiumos/platform/crosvm/+/1758103
> > > > > > > 
> > > > > > > I expect to have it merged sometime this week.
> > > > > 
> > > > > Oh that's very nice to hear. We have some code from the
> > > > > cloud-
> > > > > hypervisor project where we have a vhost-user-net
> > > > > implementation
> > > > > relying on some sort of a wrapper code similar to what all
> > > > > vhost-
> > > > > user
> > > > > daemon implementations would reuse (libvhost-user kind of
> > > > > thing).
> > > > > 
> > > > > We're planning to see this wrapper code land on Rust-VMM at
> > > > > some
> > > > > point,
> > > > > which is why it would be interesting if you could rebase this
> > > > > virtio-fs
> > > > > daemon code on top of this small library. If you don't have
> > > > > the
> > > > > bandwidth, we could probably look at it if that's something
> > > > > you
> > > > > would
> > > > > be interested in.
> > > > > 
> > > > > Here is the code for the small wrapper crate:
> > > > > https://github.com/intel/cloud-hypervisor/blob/master/vhost_user_backend/src/lib.rs
> > > > > 
> > > > > And here is the vhost-user-net example using it:
> > > > > https://github.com/intel/cloud-hypervisor/blob/master/src/bin/vhost_user_net.rs
> > > > > 
> > > > 
> > > > Our virtio-fs server is not a vhost-user device (this was one
> > > > of
> > > > the
> > > > reasons we didn't want to use virtiofsd).  It currently just
> > > > runs
> > > > directly as a child process of crosvm (like all other crosvm
> > > > devices).
> > > > So unless I'm missing something I don't think there's much to
> > > > be
> > > > gained by rebasing on vhost_user_backend.
> > > 
> > > I'm also interested in porting the server over to vhost-user so
> > > that
> > > other VMMs like QEMU and cloud-hypervisor can use it.
> > 
> > Yes I agree that'd be awesome!
> > Are you planning to start working on this soon? Just asking as I
> > want
> > to avoid potential duplicated efforts :)
> 
> Not immediately.  If you are ready to work on it and make your branch
> public then we can follow along.
> 
> Stefan


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-10-07 12:11 [Virtio-fs] xfstest results for virtio-fs on aarch64 qi.fuli
  2019-10-07 14:34 ` Dr. David Alan Gilbert
@ 2019-11-06 12:13 ` Dr. David Alan Gilbert
  2019-11-07  8:03   ` misono.tomohiro
  1 sibling, 1 reply; 32+ messages in thread
From: Dr. David Alan Gilbert @ 2019-11-06 12:13 UTC (permalink / raw)
  To: qi.fuli; +Cc: virtio-fs, misono.tomohiro, masayoshi.mizuma

* qi.fuli@fujitsu.com (qi.fuli@fujitsu.com) wrote:
> Hello,
> 
> We have run the generic tests of xfstest for virtio-fs[1] on aarch64[2],
> here we selected some tests that did not run or failed to run,
> and we categorized them, basing on the reasons in our understanding.

Hi,
  Apologies for taking a while to respond.
What I've done is to repeat these and create gitlab issues for each
category that you defined.  Now we can go and look at these
individually.

>   * Category 1: generic/003, generic/192
>     Error: access time error
>     Reason: file_accessed() not run

https://gitlab.com/virtio-fs/qemu/issues/8

>   * Category 2: generic/089, generic/478, generic/484, generic/504
>     Error: lock error

https://gitlab.com/virtio-fs/qemu/issues/9

>   * Category 3: generic/426, generic/467, generic/477
>     Error: open_by_handle error

https://gitlab.com/virtio-fs/qemu/issues/10

>   * Category 4: generic/551
>     Error: kvm panic

For me this just caused an OOM; when I increased the guest memory size
to 32G it was fine; can you retest?  I'm not sure how much RAM xfstests
expects.

>   * Category 5: generic/011, generic/013
>     Error: cannot remove file
>     Reason: NFS backend

https://gitlab.com/virtio-fs/qemu/issues/11

>   * Category 6: generic/035
>     Error: nlink is 1, should be 0

https://gitlab.com/virtio-fs/qemu/issues/12

>   * Category 7: generic/125, generic/193, generic/314
>     Error: open/chown/mkdir permission error

These were OK for me after I'd created the users/groups that
the xfstests manual asks for (fsgqa and 123456-fsgqa) - can
you confirm the error you saw?

>   * Category 8: generic/469
>     Error: fallocate keep_size is needed

https://gitlab.com/virtio-fs/qemu/issues/13

>     Reason: NFS4.0 backend
>   * Category 9: generic/323
>     Error: system hang
>     Reason: fd is close before AIO finished

https://gitlab.com/virtio-fs/qemu/issues/14

I reproduced all these on x86, so they're not aarch64
specific.

> We would like to know if virtio-fs does not support these tests in
> the specification or they are bugs that need to be fixed.
> It would be very appreciated if anyone could give some comments.

Now that I can reproduce them easily, and on x86 as well, we'll
dig into them and figure out what's going on.

Dave

> [1] qemu: https://gitlab.com/virtio-fs/qemu/tree/virtio-fs-dev
>      start qemu script:
>      $VIRTIOFSD -o vhost_user_socket=/tmp/vhostqemu1 -o 
> source=/root/virtio-fs/test1/ -o cache=always -o xattr -o flock -d &
>      $VIRTIOFSD -o vhost_user_socket=/tmp/vhostqemu2 -o 
> source=/root/virtio-fs/test2/ -o cache=always -o xattr -o flock -d &
>      $QEMU -M virt,accel=kvm,gic_version=3 \
>          -cpu host \
>          -smp 8 \
>          -m 8192\
>          -nographic \
>          -serial mon:stdio \
>          -netdev tap,id=net0 -device 
> virtio-net-pci,netdev=net0,id=net0,mac=XX:XX:XX:XX:XX:XX \
>          -object 
> memory-backend-file,id=mem,size=8G,mem-path=/dev/shm,share=on \
>          -numa node,memdev=mem \
>          -drive 
> file=/root/virtio-fs/AAVMF/AAVMF_CODE.fd,if=pflash,format=raw,unit=0,readonly=on 
> \
>          -drive file=$VARS,if=pflash,format=raw,unit=1 \
>          -chardev socket,id=char1,path=/tmp/vhostqemu1 \
>          -device 
> vhost-user-fs-pci,queue-size=1024,chardev=char1,tag=myfs1,cache-size=0 \
>          -chardev socket,id=char2,path=/tmp/vhostqemu2 \
>          -device 
> vhost-user-fs-pci,queue-size=1024,chardev=char2,tag=myfs2,cache-size=0 \
>          -drive if=virtio,file=/var/lib/libvirt/images/guest.img
> 
> [2] host kernel: 4.18.0-80.4.2.el8_0.aarch64
>      guest kernel: 5.4-rc1
>      Arch: Arm64
>      backend: NFS 4.0
> 
> Thanks,
> QI Fuli
> 
> _______________________________________________
> Virtio-fs mailing list
> Virtio-fs@redhat.com
> https://www.redhat.com/mailman/listinfo/virtio-fs
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-11-06 12:13 ` Dr. David Alan Gilbert
@ 2019-11-07  8:03   ` misono.tomohiro
  0 siblings, 0 replies; 32+ messages in thread
From: misono.tomohiro @ 2019-11-07  8:03 UTC (permalink / raw)
  To: 'Dr. David Alan Gilbert', qi.fuli; +Cc: virtio-fs, masayoshi.mizuma

> > Hello,
> >
> > We have run the generic tests of xfstest for virtio-fs[1] on
> > aarch64[2], here we selected some tests that did not run or failed to
> > run, and we categorized them, basing on the reasons in our understanding.
> 
> Hi,
>   Apologies for taking a while to respond.
> What I've done is to repeat these and create gitlab issues for each categroy that you defined.  Now we can go and look at these
> inidividually.

Hi,

Thanks for making gitlab issues. I have collected some clues so I will comment on each issue.

Misono

> 
> >   * Category 1: generic/003, generic/192
> >     Error: access time error
> >     Reason: file_accessed() not run
> 
> https://gitlab.com/virtio-fs/qemu/issues/8
> 
> >   * Category 2: generic/089, generic/478, generic/484, generic/504
> >     Error: lock error
> 
> https://gitlab.com/virtio-fs/qemu/issues/9
> 
> >   * Category 3: generic/426, generic/467, generic/477
> >     Error: open_by_handle error
> 
> https://gitlab.com/virtio-fs/qemu/issues/10
> 
> >   * Category 4: generic/551
> >     Error: kvm panic
> 
> This for me just caused an OOM; when I increased the guest size to 32G it was fine; can you retest?  I'm not sure how much RAM
> xfstests expects.
> 
> >   * Category 5: generic/011, generic/013
> >     Error: cannot remove file
> >     Reason: NFS backend
> 
> https://gitlab.com/virtio-fs/qemu/issues/11
> 
> >   * Category 6: generic/035
> >     Error: nlink is 1, should be 0
> 
> https://gitlab.com/virtio-fs/qemu/issues/12
> 
> >   * Category 7: generic/125, generic/193, generic/314
> >     Error: open/chown/mkdir permission error
> 
> There were OK for me after I'd created the users/groups that the xfstests manual asked for (fsgqa, and 123456-fsgqa) - can you
> confirm the error you saw.
> 
> >   * Category 8: generic/469
> >     Error: fallocate keep_size is needed
> 
> https://gitlab.com/virtio-fs/qemu/issues/13
> 
> >     Reason: NFS4.0 backend
> >   * Category 9: generic/323
> >     Error: system hang
> >     Reason: fd is close before AIO finished
> 
> https://gitlab.com/virtio-fs/qemu/issues/14
> 
> I reproduced all these on x86, so they're not aarch specific.
> 
> > We would like to know if virtio-fs does not support these tests in the
> > specification or they are bugs that need to be fixed.
> > It would be very appreciated if anyone could give some comments.
> 
> Now I've got them reproduced easily and also on x86, we'll dig into them and figure out what's going on.
> 
> Dave
> 
> > [1] qemu: https://gitlab.com/virtio-fs/qemu/tree/virtio-fs-dev
> >      start qemu script:
> >      $VIRTIOFSD -o vhost_user_socket=/tmp/vhostqemu1 -o
> > source=/root/virtio-fs/test1/ -o cache=always -o xattr -o flock -d &
> >      $VIRTIOFSD -o vhost_user_socket=/tmp/vhostqemu2 -o
> > source=/root/virtio-fs/test2/ -o cache=always -o xattr -o flock -d &
> >      $QEMU -M virt,accel=kvm,gic_version=3 \
> >          -cpu host \
> >          -smp 8 \
> >          -m 8192\
> >          -nographic \
> >          -serial mon:stdio \
> >          -netdev tap,id=net0 -device
> > virtio-net-pci,netdev=net0,id=net0,mac=XX:XX:XX:XX:XX:XX \
> >          -object
> > memory-backend-file,id=mem,size=8G,mem-path=/dev/shm,share=on \
> >          -numa node,memdev=mem \
> >          -drive
> > file=/root/virtio-fs/AAVMF/AAVMF_CODE.fd,if=pflash,format=raw,unit=0,r
> > eadonly=on
> > \
> >          -drive file=$VARS,if=pflash,format=raw,unit=1 \
> >          -chardev socket,id=char1,path=/tmp/vhostqemu1 \
> >          -device
> > vhost-user-fs-pci,queue-size=1024,chardev=char1,tag=myfs1,cache-size=0 \
> >          -chardev socket,id=char2,path=/tmp/vhostqemu2 \
> >          -device
> > vhost-user-fs-pci,queue-size=1024,chardev=char2,tag=myfs2,cache-size=0 \
> >          -drive if=virtio,file=/var/lib/libvirt/images/guest.img
> >
> > [2] host kernel: 4.18.0-80.4.2.el8_0.aarch64
> >      guest kernel: 5.4-rc1
> >      Arch: Arm64
> >      backend: NFS 4.0
> >
> > Thanks,
> > QI Fuli
> >
> > _______________________________________________
> > Virtio-fs mailing list
> > Virtio-fs@redhat.com
> > https://www.redhat.com/mailman/listinfo/virtio-fs
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-11-01 22:26                                   ` Boeuf, Sebastien
@ 2019-11-12  9:45                                     ` Stefan Hajnoczi
       [not found]                                       ` <5238b860a8d544e59c9a827fbc26418d53482ccf.camel@intel.com>
  0 siblings, 1 reply; 32+ messages in thread
From: Stefan Hajnoczi @ 2019-11-12  9:45 UTC (permalink / raw)
  To: Boeuf, Sebastien
  Cc: misono.tomohiro, Bradford, Robert, virtio-fs, masayoshi.mizuma,
	Ortiz, Samuel, vgoyal

On Fri, Nov 01, 2019 at 10:26:54PM +0000, Boeuf, Sebastien wrote:
> +Samuel
> +Rob
> 
> Hi Stefan, Dave,
> 
> I had some time to get started on the virtiofsd in Rust, based on the
> code from Chirantan. So basically I relied on the following Crosvm
> branch (code not merged yet):
> https://chromium-review.googlesource.com/c/chromiumos/platform/crosvm/+/1705654
> 
> From there I have ported the code that was needed to rely on the nice
> implementation from Chirantan. I had to make it fit the vm-virtio crate
> from Cloud-Hypervisor (since it's not yet complete on rust-vmm), and
> also the vm-memory crate from rust-vmm.
> 
> Once the porting done, I created a dedicated vhost-user-fs daemon
> binary relying on the vhost_user_backend crate. I connected all the
> dots together and the result is here:
> https://github.com/intel/cloud-hypervisor/pull/404
> 
> I have listed the remaining tasks here: 
> https://github.com/intel/cloud-hypervisor/pull/404#issue-335636924
> (some can be quite big and should come as a follow up PR IMO).
> 
> I'll be in vacation next week, but if you have some time, it'd be very
> nice to get everybody's feedback on this.
> 
> And of course, if you have more time to continue working on this, feel
> free to reuse my branch 'vhost_user_fs_daemon' from my forked repo 
> https://github.com/sboeuf/cloud-hypervisor-1.git

Awesome, thanks!

Stefan

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
       [not found]                                       ` <5238b860a8d544e59c9a827fbc26418d53482ccf.camel@intel.com>
@ 2019-11-13 11:39                                         ` Stefan Hajnoczi
  2019-11-20 16:53                                           ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 32+ messages in thread
From: Stefan Hajnoczi @ 2019-11-13 11:39 UTC (permalink / raw)
  To: Boeuf, Sebastien
  Cc: misono.tomohiro, Bradford, Robert, virtio-fs, Ortiz, Samuel,
	masayoshi.mizuma, vgoyal

On Wed, Nov 13, 2019 at 06:40:54AM +0000, Boeuf, Sebastien wrote:
> On Tue, 2019-11-12 at 09:45 +0000, Stefan Hajnoczi wrote:
> > On Fri, Nov 01, 2019 at 10:26:54PM +0000, Boeuf, Sebastien wrote:
> > > +Samuel
> > > +Rob
> > > 
> > > Hi Stefan, Dave,
> > > 
> > > I had some time to get started on the virtiofsd in Rust, based on
> > > the
> > > code from Chirantan. So basically I relied on the following Crosvm
> > > branch (code not merged yet):
> > > https://chromium-review.googlesource.com/c/chromiumos/platform/crosvm/+/1705654
> > > 
> > > From there I have ported the code that was needed to rely on the
> > > nice
> > > implementation from Chirantan. I had to make it fit the vm-virtio
> > > crate
> > > from Cloud-Hypervisor (since it's not yet complete on rust-vmm),
> > > and
> > > also the vm-memory crate from rust-vmm.
> > > 
> > > Once the porting done, I created a dedicated vhost-user-fs daemon
> > > binary relying on the vhost_user_backend crate. I connected all the
> > > dots together and the result is here:
> > > https://github.com/intel/cloud-hypervisor/pull/404
> > > 
> > > I have listed the remaining tasks here: 
> > > https://github.com/intel/cloud-hypervisor/pull/404#issue-335636924
> > > (some can be quite big and should come as a follow up PR IMO).
> > > 
> > > I'll be in vacation next week, but if you have some time, it'd be
> > > very
> > > nice to get everybody's feedback on this.
> > > 
> > > And of course, if you have more time to continue working on this,
> > > feel
> > > free to reuse my branch 'vhost_user_fs_daemon' from my forked repo 
> > > https://github.com/sboeuf/cloud-hypervisor-1.git
> > 
> > Awesome, thanks!
> 
> I've been trying to make this work today; unfortunately I'm running
> into some issues. Basically what happens is that I don't get a writable
> descriptor, which means the reply to the FUSE init() cannot be sent.
> 
> Chirantan, I was wondering if your code in Crosvm is working properly
> with virtio-fs? And if that's the case, have you ever hit the problem
> I'm describing, where the virtio descriptor does not have the write-only
> flag, which prevents the Writer from being provisioned with buffers.
> 
> Stefan, do you know if the virtio-fs driver actually tags some
> descriptor as write_only? I'm trying to understand what is missing
> here.

A vring descriptor is either driver->device ("out") or device->driver
("in").

A virtio-fs request typically contains both descriptor types because it
consists of a request (e.g. struct fuse_in_header + struct fuse_init_in)
and a response (e.g.  struct fuse_out_header + struct fuse_init_out).

By the way, VIRTIO and FUSE use the terms "in"/"out" in the opposite
sense.  VIRTIO "out" is driver->device and "in" is device->driver.  FUSE
"out" is device->driver and "in" is driver->device.  Just wanted to
point that out to prevent confusion :-).

If you are only seeing a driver->device descriptor but not the
device->driver descriptor, then something is wrong with the vring
processing code in the device.

The guest driver places requests into the virtqueue in
fs/fuse/virtio_fs.c:virtio_fs_enqueue_req().
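
As an illustration of what the device side should end up seeing (the
types below are made up, not the crosvm or virtiofsd code, and the
sizes are only indicative), a FUSE_INIT chain carries both
device-readable and device-writable descriptors, and the writable half
is what the reply (and hence the Writer) gets built from:

// Illustrative sketch: split a descriptor chain into the driver->device
// (device-readable) part and the device->driver (device-writable) part.
// In the real vring layout the writable half carries VRING_DESC_F_WRITE.

struct Descriptor {
    len: u32,
    write_only: bool, // true: device may write here (device->driver)
}

/// Split a chain into (readable, writable) descriptors, in chain order.
fn split_chain(chain: &[Descriptor]) -> (Vec<&Descriptor>, Vec<&Descriptor>) {
    let readable = chain.iter().filter(|d| !d.write_only).collect();
    let writable = chain.iter().filter(|d| d.write_only).collect();
    (readable, writable)
}

fn main() {
    // Shape of a FUSE_INIT request chain: header + init body are
    // driver->device, reply header + init reply are device->driver.
    let chain = [
        Descriptor { len: 40, write_only: false }, // struct fuse_in_header
        Descriptor { len: 64, write_only: false }, // FUSE_INIT request body
        Descriptor { len: 16, write_only: true },  // struct fuse_out_header
        Descriptor { len: 64, write_only: true },  // FUSE_INIT reply body
    ];
    let (readable, writable) = split_chain(&chain);
    let request_bytes: u32 = readable.iter().map(|d| d.len).sum();
    let reply_space: u32 = writable.iter().map(|d| d.len).sum();
    println!(
        "request: {} readable bytes, reply space: {} writable bytes",
        request_bytes, reply_space
    );
    // If `writable` came back empty, the server could never send the FUSE
    // reply, which matches the symptom described above.
    assert!(!writable.is_empty());
}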

Stefan

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [Virtio-fs] xfstest results for virtio-fs on aarch64
  2019-11-13 11:39                                         ` Stefan Hajnoczi
@ 2019-11-20 16:53                                           ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 32+ messages in thread
From: Dr. David Alan Gilbert @ 2019-11-20 16:53 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: misono.tomohiro, Bradford, Robert, virtio-fs, Ortiz, Samuel,
	masayoshi.mizuma, vgoyal

* Stefan Hajnoczi (stefanha@redhat.com) wrote:
> On Wed, Nov 13, 2019 at 06:40:54AM +0000, Boeuf, Sebastien wrote:
> > On Tue, 2019-11-12 at 09:45 +0000, Stefan Hajnoczi wrote:
> > > On Fri, Nov 01, 2019 at 10:26:54PM +0000, Boeuf, Sebastien wrote:
> > > > +Samuel
> > > > +Rob
> > > > 
> > > > Hi Stefan, Dave,
> > > > 
> > > > I had some time to get started on the virtiofsd in Rust, based on
> > > > the
> > > > code from Chirantan. So basically I relied on the following Crosvm
> > > > branch (code not merged yet):
> > > > https://chromium-review.googlesource.com/c/chromiumos/platform/crosvm/+/1705654
> > > > 
> > > > From there I have ported the code that was needed to rely on the
> > > > nice
> > > > implementation from Chirantan. I had to make it fit the vm-virtio
> > > > crate
> > > > from Cloud-Hypervisor (since it's not yet complete on rust-vmm),
> > > > and
> > > > also the vm-memory crate from rust-vmm.
> > > > 
> > > > Once the porting done, I created a dedicated vhost-user-fs daemon
> > > > binary relying on the vhost_user_backend crate. I connected all the
> > > > dots together and the result is here:
> > > > https://github.com/intel/cloud-hypervisor/pull/404
> > > > 
> > > > I have listed the remaining tasks here: 
> > > > https://github.com/intel/cloud-hypervisor/pull/404#issue-335636924
> > > > (some can be quite big and should come as a follow up PR IMO).
> > > > 
> > > > I'll be in vacation next week, but if you have some time, it'd be
> > > > very
> > > > nice to get everybody's feedback on this.
> > > > 
> > > > And of course, if you have more time to continue working on this,
> > > > feel
> > > > free to reuse my branch 'vhost_user_fs_daemon' from my forked repo 
> > > > https://github.com/sboeuf/cloud-hypervisor-1.git
> > > 
> > > Awesome, thanks!
> > 
> > I've been trying to make this work today, unfortunately I'm running
> > into some issues. Basically what happens is that I don't get a writable
> > descriptor, which means the reply to the FUSE init() cannot be sent.
> > 
> > Chirantan, I was wondering if your code in Crosvm is working properly
> > with virtio-fs? And if that's the case, have you ever get the problem
> > I'm describing where the virtio descriptor does not have the write only
> > flag, which prevents the Writer from being provisioned with buffers.
> > 
> > Stefan, do you know if the virtio-fs driver actually tags some
> > descriptor as write_only? I'm trying to understand what is missing
> > here.
> 
> A vring descriptor is either driver->device ("out") or device->driver
> ("in").
> 
> A virtio-fs request typically contains both descriptor types because it
> consists of a request (e.g. struct fuse_in_header + struct fuse_init_in)
> and a response (e.g.  struct fuse_out_header + struct fuse_init_out).
> 
> By the way, VIRTIO and FUSE use the terms "in"/"out" in the opposite
> sense.  VIRTIO "out" is driver->device and "in" is device->driver.  FUSE
> "out" is device->driver and "in" is driver->device.  Just wanted to
> point that out to prevent confusion :-).

Yes, it's pretty confusing; I'd got into the habit of adding comments
wherever I'm using in/out; e.g.:
   '/* out is from guest, in is to guest */'
in the corresponding C code.

Dave

> If you are only seeing a driver->device descriptor but not the
> device->driver descriptor, then something is wrong with the vring
> processing code in the device.
> 
> The guest driver places requests into the virtqueue in
> fs/fuse/virtio_fs.c:virtio_fs_enqueue_req().
> 
> Stefan


--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2019-11-20 16:53 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-10-07 12:11 [Virtio-fs] xfstest results for virtio-fs on aarch64 qi.fuli
2019-10-07 14:34 ` Dr. David Alan Gilbert
2019-10-09 16:51   ` Dr. David Alan Gilbert
2019-10-10  9:57     ` qi.fuli
2019-10-11  9:21       ` Dr. David Alan Gilbert
2019-10-11 18:45       ` Chirantan Ekbote
2019-10-11 19:59         ` Vivek Goyal
2019-10-11 20:13           ` Chirantan Ekbote
2019-10-11 20:36             ` Vivek Goyal
2019-10-14  9:11               ` Stefan Hajnoczi
2019-10-15 14:58                 ` Chirantan Ekbote
2019-10-15 15:57                   ` Dr. David Alan Gilbert
2019-10-15 16:11                     ` Chirantan Ekbote
2019-10-15 16:26                       ` Dr. David Alan Gilbert
2019-10-15 17:28                         ` Boeuf, Sebastien
2019-10-15 18:21                           ` Chirantan Ekbote
2019-10-16 14:14                             ` Stefan Hajnoczi
2019-10-16 16:14                               ` Boeuf, Sebastien
2019-10-16 18:38                                 ` Stefan Hajnoczi
2019-10-17  5:19                                   ` Boeuf, Sebastien
2019-11-01 22:26                                   ` Boeuf, Sebastien
2019-11-12  9:45                                     ` Stefan Hajnoczi
     [not found]                                       ` <5238b860a8d544e59c9a827fbc26418d53482ccf.camel@intel.com>
2019-11-13 11:39                                         ` Stefan Hajnoczi
2019-11-20 16:53                                           ` Dr. David Alan Gilbert
2019-10-16 16:11                             ` Boeuf, Sebastien
2019-10-16 18:37                               ` Stefan Hajnoczi
2019-10-16 14:09                   ` Stefan Hajnoczi
2019-10-16 19:36                     ` Chirantan Ekbote
2019-10-16 19:44                       ` Vivek Goyal
2019-10-17  9:23                       ` Stefan Hajnoczi
2019-11-06 12:13 ` Dr. David Alan Gilbert
2019-11-07  8:03   ` misono.tomohiro
