* tests/acceptance/multiprocess.py test failure
From: Cleber Rosa @ 2021-07-15  1:59 UTC
  To: qemu-devel, David Hildenbrand, Paolo Bonzini, Elena Ufimtseva,
	John G Johnson, Jagannathan Raman, Willian Rampazzo

Hi everyone,

The tests/acceptance/multiprocess.py:Multiprocess.test_multiprocess_x86_64
test is currently failing (as of a9649a719a44894b81f38dc1c5c1888ee684acef).
Unfortunately, CI was unable to catch this issue earlier, because tests
that require KVM are not yet running (but this should change soon).
The relevant part of the test logs is:

VM launch command: './qemu-system-x86_64 -display none -vga none
-chardev socket,id=mon,path=/var/tmp/avo_qemu_sock_5g22rvrp/qemu-427815-monitor.sock
-mon chardev=mon,mode=control -chardev
socket,id=console,path=/var/tmp/avo_qemu_sock_5g22rvrp/qemu-427815-console.sock,server=on,wait=off
-serial chardev:console -machine pc -accel kvm -cpu host -object
memory-backend-memfd,id=sysmem-file,size=2G --numa
node,memdev=sysmem-file -m 2048 -kernel
/home/cleber/avocado/data/cache/by_location/b4c64f15a75b083966d39d9246dd8db177736bb4/vmlinuz
-initrd /home/cleber/avocado/data/cache/by_location/b4c64f15a75b083966d39d9246dd8db177736bb4/initrd.img
-append printk.time=0 console=ttyS0 rdinit=/bin/bash -device
x-pci-proxy-dev,id=lsi1,fd=16'
>>> {'execute': 'qmp_capabilities'}

The test remains stuck here for as long as it is allowed to run.
Because there's currently no timeout in the test, it can remain stuck
forever.  But with a timeout, we end up getting:

Error launching VM
Command: './qemu-system-x86_64 -display none -vga none -chardev
socket,id=mon,path=/var/tmp/avo_qemu_sock_5g22rvrp/qemu-427815-monitor.sock
-mon chardev=mon,mode=control -chardev
socket,id=console,path=/var/tmp/avo_qemu_sock_5g22rvrp/qemu-427815-console.sock,server=on,wait=off
-serial chardev:console -machine pc -accel kvm -cpu host -object
memory-backend-memfd,id=sysmem-file,size=2G --numa
node,memdev=sysmem-file -m 2048 -kernel
/home/cleber/avocado/data/cache/by_location/b4c64f15a75b083966d39d9246dd8db177736bb4/vmlinuz
-initrd /home/cleber/avocado/data/cache/by_location/b4c64f15a75b083966d39d9246dd8db177736bb4/initrd.img
-append printk.time=0 console=ttyS0 rdinit=/bin/bash -device
x-pci-proxy-dev,id=lsi1,fd=16'
Output: "qemu-system-x86_64: ../../src/qemu/softmmu/physmem.c:2055:
qemu_ram_alloc_from_fd: Assertion `(ram_flags & ~(RAM_SHARED |
RAM_PMEM | RAM_NORESERVE)) == 0' failed.\n"
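
For reference, the failing check is a flag-mask assertion:
qemu_ram_alloc_from_fd() rejects any ram_flags bit outside RAM_SHARED,
RAM_PMEM, and RAM_NORESERVE.  In sketch form (the flag bit positions
below are assumptions for illustration, not the actual definitions from
include/exec/memory.h):

  /* Sketch only -- assumed flag values, not the actual QEMU
   * definitions. */
  #include <assert.h>
  #include <stdint.h>

  #define RAM_SHARED    (1u << 1)
  #define RAM_PMEM      (1u << 5)
  #define RAM_NORESERVE (1u << 7)

  static void ram_flags_check(uint32_t ram_flags)
  {
      /* Any other bit set in ram_flags trips the assert seen above. */
      assert((ram_flags & ~(RAM_SHARED | RAM_PMEM | RAM_NORESERVE)) == 0);
  }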

I've bisected it to:

---

d5015b80134047013eeec10000df5ce2014ee114 is the first bad commit
commit d5015b80134047013eeec10000df5ce2014ee114
Author: David Hildenbrand <david@redhat.com>
Date:   Mon May 10 13:43:17 2021 +0200

    softmmu/memory: Pass ram_flags to qemu_ram_alloc_from_fd()

    Let's pass in ram flags just like we do with qemu_ram_alloc_from_file(),
    to clean up and prepare for more flags.

    Simplify the documentation of passed ram flags: Looking at our
    documentation of RAM_SHARED and RAM_PMEM is sufficient, no need to be
    repetitive.

    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Reviewed-by: Peter Xu <peterx@redhat.com>
    Acked-by: Eduardo Habkost <ehabkost@redhat.com> for memory backend
    and machine core
    Signed-off-by: David Hildenbrand <david@redhat.com>
    Message-Id: <20210510114328.21835-5-david@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

 backends/hostmem-memfd.c | 7 ++++---
 hw/misc/ivshmem.c        | 5 ++---
 include/exec/memory.h    | 9 +++------
 include/exec/ram_addr.h  | 6 +-----
 softmmu/memory.c         | 7 +++----
 5 files changed, 13 insertions(+), 21 deletions(-)

---
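
My reading of how this bites (a guess sketched from the commit message,
not lifted from the actual QEMU sources): when a bool parameter becomes
a flags argument, C's implicit conversions let a caller missed by the
conversion keep compiling, passing true (i.e. 1) as a flag value that
falls outside the accepted mask:

  /* Hypothetical demo -- function name and signature assumed, flag
   * value as in the sketch above. */
  #include <assert.h>
  #include <stdint.h>

  #define RAM_SHARED (1u << 1)

  /* old:  void ram_alloc_from_fd(..., bool shared);        */
  /* new:  void ram_alloc_from_fd(..., uint32_t ram_flags); */
  static void ram_alloc_from_fd(uint32_t ram_flags)
  {
      assert((ram_flags & ~RAM_SHARED) == 0);
  }

  int main(void)
  {
      ram_alloc_from_fd(RAM_SHARED); /* converted caller: fine */
      ram_alloc_from_fd(1);          /* stale 'true' from an unconverted
                                        caller: bit 0 is outside the
                                        mask, so the assertion fires */
      return 0;
  }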

To reproduce it:

1. configure --target-list=x86_64-softmmu
2. meson compile
3. make check-venv
4. ./tests/venv/bin/avocado --show=test run --job-timeout=20s \
   tests/acceptance/multiprocess.py:Multiprocess.test_multiprocess_x86_64

It'd be helpful to know if anyone else is experiencing the same failure.

Thanks,
- Cleber.




* Re: tests/acceptance/multiprocess.py test failure
From: David Hildenbrand @ 2021-07-15  8:16 UTC
  To: Cleber Rosa, qemu-devel, David Hildenbrand, Paolo Bonzini,
	Elena Ufimtseva, John G Johnson, Jagannathan Raman,
	Willian Rampazzo

On 15.07.21 03:59, Cleber Rosa wrote:
> [...]
>
> It'd be helpful to know if anyone else is experiencing the same failure.

Hi,

Maybe

https://lkml.kernel.org/r/20210709052800.63588-1-yang.zhong@intel.com

resolves your issue. If not, please let me know and I'll try reproducing
it (I'll have to install avocado).

-- 
Thanks,

David / dhildenb




* Re: tests/acceptance/multiprocess.py test failure
From: Cleber Rosa @ 2021-07-15 13:16 UTC
  To: David Hildenbrand
  Cc: Elena Ufimtseva, John G Johnson, Jagannathan Raman, qemu-devel,
	Willian Rampazzo, David Hildenbrand, Paolo Bonzini


David Hildenbrand writes:

>
> Hi,
>
> Maybe
>
> https://lkml.kernel.org/r/20210709052800.63588-1-yang.zhong@intel.com
>
> resolves your issue. If not, please let me know and I'll try
> reproducing it (I'll have to install avocado).

Hi David,

Yes, that fixes it.  Sorry for missing that patch on the mailing list.

Maintainers (Elena, Jagannathan, John),

Are you planning a PR with this patch?

Thanks,

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]
[  7ABB 96EB 8B46 B94D 5E0F  E9BB 657E 8D33 A5F2 09F3  ]




* Re: tests/acceptance/multiprocess.py test failure
From: Jag Raman @ 2021-07-15 14:51 UTC
  To: Cleber Rosa
  Cc: Elena Ufimtseva, John Johnson, David Hildenbrand, qemu-devel,
	Willian Rampazzo, David Hildenbrand, Paolo Bonzini



> On Jul 15, 2021, at 9:16 AM, Cleber Rosa <crosa@redhat.com> wrote:
> 
> [...]
> 
> Maintainers (Elena, Jagannathan, John),
> 
> Are you planning a PR with this patch?

Hi Cleber,

We presently don’t have permission to send PRs
upstream (to Peter Maydell).

For now, we are asking someone else who has
permission to send PRs on our behalf. We will work
on getting permission to send PRs going forward.

Thank you!
--
Jag




* Re: tests/acceptance/multiprocess.py test failure
From: Cleber Rosa @ 2021-07-20 18:39 UTC
  To: Jag Raman
  Cc: Elena Ufimtseva, John Johnson, David Hildenbrand, qemu-devel,
	Willian Rampazzo, David Hildenbrand, Paolo Bonzini


Jag Raman writes:

>
> Hi Cleber,
>
> We presently don’t have permission to send PRs
> upstream (to Peter Maydell).
>
> For now, we are asking someone else who has
> permission to send PRs on our behalf. We will work
> on getting permission to send PRs going forward.
>
> Thank you!

Hi Jag,

I'm going to include that patch in an upcoming PR.  Please let me know
if this is not what you intended.

PS: I'm not sure I follow what your specific permission problem is,
whether it's technical or something else.  But, in either case, I'd
recommend syncing the MAINTAINERS file entries with your actual roles
and ability to maintain the files listed there.

Best Regards,
- Cleber.




* Re: tests/acceptance/multiprocess.py test failure
From: Jag Raman @ 2021-07-20 20:13 UTC
  To: Cleber Rosa
  Cc: Elena Ufimtseva, John Johnson, David Hildenbrand, qemu-devel,
	Willian Rampazzo, David Hildenbrand, Paolo Bonzini



> On Jul 20, 2021, at 2:39 PM, Cleber Rosa <crosa@redhat.com> wrote:
> 
> 
> Jag Raman writes:
> 
>> [...]
> 
> Hi Jag,
> 
> I'm going to include that patch in an upcoming PR.  Please let me know
> if this is not what you intended.
> 
> PS: I'm not sure I follow what your specific permission problem is,
> whether it's technical or something else.  But, in either case, I'd
> recommend syncing the MAINTAINERS file entries with your actual roles
> and ability to maintain the files listed there.

Hi Cleber,

Thank you for including the patch in a PR.

I have not registered a GPG key to submit PRs - please see the following
email for context:
https://www.mail-archive.com/qemu-devel@nongnu.org/msg765788.html

I’ll get started on this process so that I can help with smaller patches.

Thank you!
--
Jag





* Re: tests/acceptance/multiprocess.py test failure
From: Peter Maydell @ 2021-07-20 20:20 UTC
  To: Jag Raman
  Cc: Elena Ufimtseva, John Johnson, David Hildenbrand, qemu-devel,
	Willian Rampazzo, David Hildenbrand, Cleber Rosa, Paolo Bonzini

On Tue, 20 Jul 2021 at 21:18, Jag Raman <jag.raman@oracle.com> wrote:
>
>
>
> > On Jul 20, 2021, at 2:39 PM, Cleber Rosa <crosa@redhat.com> wrote:
> >
> >
> > Jag Raman writes:
> >>
> >> We presently don’t have permission to send PRs
> >> upstream (to Peter Maydell).
> >>
> >> For now, we are asking someone else who has
> >> permission to send PRs on our behalf. We will work
> >> on getting permission to send PRs going forward.

> > I'm going to include that patch in an upcoming PR.  Please let me know
> > if this is not what you intended.
> >
> > PS: I'm not sure I follow what your specific permission problem is,
> > whether it's technical or something else.  But, in either case, I'd
> > recommend syncing the MAINTAINERS file entries with your actual roles
> > and ability to maintain the files listed there.

> I have not registered a GPG key to submit PRs - please see the following
> email for context:
> https://www.mail-archive.com/qemu-devel@nongnu.org/msg765788.html
>
> I’ll get started on this process so that I can help with smaller patches.

This isn't a technical thing particularly -- I just prefer that,
if you're not going to be submitting a lot of patches, they
go through some other submaintainer who can review and curate
them as they go past. I do not want us to have a structure
where we have 500 "submaintainers" all directly submitting PRs to me.

thanks
-- PMM


