* [Qemu-devel] sparc solaris guest, hsfs_putpage: dirty HSFS page
@ 2010-01-24  0:02 Artyom Tarasenko
  2010-01-24  8:56 ` [Qemu-devel] " Blue Swirl
  0 siblings, 1 reply; 6+ messages in thread
From: Artyom Tarasenko @ 2010-01-24  0:02 UTC (permalink / raw)
  To: qemu-devel, Blue Swirl

All Solaris versions which currently boot (from CD) regularly produce
buckets of "hsfs_putpage: dirty HSFS page" messages.

High Sierra is pretty old and stable code, so it is possible that the
code is similar to OpenSolaris. I looked in the debugger, and the
function call hierarchy looks pretty similar.

Now in the OpenSolaris source code there is a nice comment:
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/hsfs/hsfs_vnops.c#1758
/*
* Normally pvn_getdirty() should return 0, which
* implies that it has done the job for us.
* The shouldn't-happen scenario is when it returns 1.
* This means that the page has been modified and
* needs to be put back.
* Since we can't write on a CD, we fake a failed
* I/O and force pvn_write_done() to destroy the page.
*/
	if (pvn_getdirty(pp, flags) == 1) {
		cmn_err(CE_NOTE,
		    "hsfs_putpage: dirty HSFS page");

Now the question: does the problem have to do with qemu's
(non-)emulation of caches? Can it be that we mark non-dirty pages
dirty? Or does qemu always mark pages dirty precisely to avoid cache
emulation?

Otherwise it means something else goes astray and the Solaris guest
really modifies pages it shouldn't.

I just wonder which to dig into first, MMU or IRQ emulation (the two
most obvious suspects).

-- 
Regards,
Artyom Tarasenko

solaris/sparc under qemu blog: http://tyom.blogspot.com/

^ permalink raw reply	[flat|nested] 6+ messages in thread

* [Qemu-devel] Re: sparc solaris guest, hsfs_putpage: dirty HSFS page
  2010-01-24  0:02 [Qemu-devel] sparc solaris guest, hsfs_putpage: dirty HSFS page Artyom Tarasenko
@ 2010-01-24  8:56 ` Blue Swirl
  2010-01-26 17:03   ` Artyom Tarasenko
  0 siblings, 1 reply; 6+ messages in thread
From: Blue Swirl @ 2010-01-24  8:56 UTC (permalink / raw)
  To: Artyom Tarasenko; +Cc: qemu-devel

On Sun, Jan 24, 2010 at 2:02 AM, Artyom Tarasenko
<atar4qemu@googlemail.com> wrote:
> All Solaris versions which currently boot (from CD) regularly produce
> buckets of "hsfs_putpage: dirty HSFS page" messages.
>
> High Sierra is pretty old and stable code, so it is possible that the
> code is similar to OpenSolaris. I looked in the debugger, and the
> function call hierarchy looks pretty similar.
>
> Now in the OpenSolaris source code there is a nice comment:
> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/hsfs/hsfs_vnops.c#1758
> /*
> * Normally pvn_getdirty() should return 0, which
> * implies that it has done the job for us.
> * The shouldn't-happen scenario is when it returns 1.
> * This means that the page has been modified and
> * needs to be put back.
> * Since we can't write on a CD, we fake a failed
> * I/O and force pvn_write_done() to destroy the page.
> */
> if (pvn_getdirty(pp, flags) == 1) {
>                cmn_err(CE_NOTE,
>                            "hsfs_putpage: dirty HSFS page");
>
> Now the question: does the problem have to do with qemu's
> (non-)emulation of caches? Can it be that we mark non-dirty pages
> dirty? Or does qemu always mark pages dirty precisely to avoid cache
> emulation?
>
> Otherwise it means something else goes astray and the Solaris guest
> really modifies pages it shouldn't.
>
> I just wonder which to dig into first, MMU or IRQ emulation (the two
> most obvious suspects).

Maybe the stores via MMU bypass ASIs should use
st[bwlq]_phys_notdirty. It can break display handling, though.
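The distinction between the plain and _notdirty physical stores can be sketched with a minimal standalone model. This is illustrative only: the bitmap, page size, and function bodies here model the idea, not qemu's actual exec.c internals.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified model of qemu-style physical-memory dirty tracking:
 * one dirty flag per target page.  The real bookkeeping lives in
 * exec.c; everything below is an illustrative stand-in. */
#define PAGE_BITS 12
#define NPAGES    16

static uint8_t  ram[NPAGES << PAGE_BITS];
static uint8_t  phys_ram_dirty[NPAGES];

/* Ordinary physical store: writes the data and marks the page dirty. */
static void stl_phys(uint32_t addr, uint32_t val)
{
    memcpy(ram + addr, &val, 4);
    phys_ram_dirty[addr >> PAGE_BITS] = 1;
}

/* _notdirty variant: same write, but leaves the dirty state alone,
 * so a store routed through it would not make the page look modified. */
static void stl_phys_notdirty(uint32_t addr, uint32_t val)
{
    memcpy(ram + addr, &val, 4);
}
```

The suggestion above amounts to routing the MMU-bypass ASI stores through the second kind of helper instead of the first.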


* [Qemu-devel] Re: sparc solaris guest, hsfs_putpage: dirty HSFS page
  2010-01-24  8:56 ` [Qemu-devel] " Blue Swirl
@ 2010-01-26 17:03   ` Artyom Tarasenko
  2010-01-26 19:23     ` Blue Swirl
  0 siblings, 1 reply; 6+ messages in thread
From: Artyom Tarasenko @ 2010-01-26 17:03 UTC (permalink / raw)
  To: Blue Swirl; +Cc: qemu-devel

2010/1/24 Blue Swirl <blauwirbel@gmail.com>:
> On Sun, Jan 24, 2010 at 2:02 AM, Artyom Tarasenko
> <atar4qemu@googlemail.com> wrote:
>> All Solaris versions which currently boot (from CD) regularly produce
>> buckets of "hsfs_putpage: dirty HSFS page" messages.
>>
>> High Sierra is pretty old and stable code, so it is possible that the
>> code is similar to OpenSolaris. I looked in the debugger, and the
>> function call hierarchy looks pretty similar.
>>
>> Now in the OpenSolaris source code there is a nice comment:
>> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/hsfs/hsfs_vnops.c#1758
>> /*
>> * Normally pvn_getdirty() should return 0, which
>> * implies that it has done the job for us.
>> * The shouldn't-happen scenario is when it returns 1.
>> * This means that the page has been modified and
>> * needs to be put back.
>> * Since we can't write on a CD, we fake a failed
>> * I/O and force pvn_write_done() to destroy the page.
>> */
>> if (pvn_getdirty(pp, flags) == 1) {
>>                cmn_err(CE_NOTE,
>>                            "hsfs_putpage: dirty HSFS page");
>>
>> Now the question: does the problem have to do with qemu's
>> (non-)emulation of caches? Can it be that we mark non-dirty pages
>> dirty? Or does qemu always mark pages dirty precisely to avoid cache
>> emulation?
>>
>> Otherwise it means something else goes astray and the Solaris guest
>> really modifies pages it shouldn't.
>>
>> I just wonder which to dig into first, MMU or IRQ emulation (the two
>> most obvious suspects).
>
> Maybe the stores via MMU bypass ASIs

Why bypass stores? What about the non-bypass ones?

> should use
> st[bwlq]_phys_notdirty.

Seems that st[bw]_phys_notdirty are not implemented yet?

I've changed [lq] for ASIs 0x20 and 0x21-0x2f and see no difference. I
also put in some debug printfs and see that none of these ASIs is
called after the Solaris kernel is loaded.

> It can break display handling, though.


-- 
Regards,
Artyom Tarasenko

solaris/sparc under qemu blog: http://tyom.blogspot.com/


* [Qemu-devel] Re: sparc solaris guest, hsfs_putpage: dirty HSFS page
  2010-01-26 17:03   ` Artyom Tarasenko
@ 2010-01-26 19:23     ` Blue Swirl
  2010-01-26 22:42       ` Artyom Tarasenko
  0 siblings, 1 reply; 6+ messages in thread
From: Blue Swirl @ 2010-01-26 19:23 UTC (permalink / raw)
  To: Artyom Tarasenko; +Cc: qemu-devel

On Tue, Jan 26, 2010 at 7:03 PM, Artyom Tarasenko
<atar4qemu@googlemail.com> wrote:
> 2010/1/24 Blue Swirl <blauwirbel@gmail.com>:
>> On Sun, Jan 24, 2010 at 2:02 AM, Artyom Tarasenko
>> <atar4qemu@googlemail.com> wrote:
>>> All Solaris versions which currently boot (from CD) regularly produce
>>> buckets of "hsfs_putpage: dirty HSFS page" messages.
>>>
>>> High Sierra is pretty old and stable code, so it is possible that the
>>> code is similar to OpenSolaris. I looked in the debugger, and the
>>> function call hierarchy looks pretty similar.
>>>
>>> Now in the OpenSolaris source code there is a nice comment:
>>> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/hsfs/hsfs_vnops.c#1758
>>> /*
>>> * Normally pvn_getdirty() should return 0, which
>>> * implies that it has done the job for us.
>>> * The shouldn't-happen scenario is when it returns 1.
>>> * This means that the page has been modified and
>>> * needs to be put back.
>>> * Since we can't write on a CD, we fake a failed
>>> * I/O and force pvn_write_done() to destroy the page.
>>> */
>>> if (pvn_getdirty(pp, flags) == 1) {
>>>                cmn_err(CE_NOTE,
>>>                            "hsfs_putpage: dirty HSFS page");
>>>
>>> Now the question: does the problem have to do with qemu's
>>> (non-)emulation of caches? Can it be that we mark non-dirty pages
>>> dirty? Or does qemu always mark pages dirty precisely to avoid cache
>>> emulation?
>>>
>>> Otherwise it means something else goes astray and the Solaris guest
>>> really modifies pages it shouldn't.
>>>
>>> I just wonder which to dig into first, MMU or IRQ emulation (the two
>>> most obvious suspects).
>>
>> Maybe the stores via MMU bypass ASIs
>
> Why bypass stores? What about the non-bypass ones?

Because their use should update the PTE dirty bits.
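A sketch of what "update the PTE dirty bits" means for a write going through translation. This is a simplified standalone model: the R/M bit positions follow the SPARC Reference MMU PTE layout (R at bit 5, M at bit 6), and the helper itself is illustrative, not qemu's target-sparc/helper.c.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative SPARC Reference MMU PTE bits.  The SRMMU PTE carries a
 * Referenced (R) and a Modified (M) bit that the hardware table walk
 * sets in memory; the guest OS reads these to decide whether a page
 * is dirty. */
#define PG_ACCESSED_MASK (1u << 5)  /* R bit */
#define PG_MODIFIED_MASK (1u << 6)  /* M bit */

/* On a translated access the walk sets R, and additionally M for
 * writes.  A bug that set M for stores that should not go through
 * translation at all (e.g. MMU-bypass ASI stores) would make pages
 * look dirty to the guest when nothing guest-visible wrote them. */
static uint32_t update_pte(uint32_t pte, int is_write)
{
    pte |= PG_ACCESSED_MASK;
    if (is_write)
        pte |= PG_MODIFIED_MASK;
    return pte;
}
```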

>> should use
>> st[bwlq]_phys_notdirty.
>
> Seems that st[bw]_phys_notdirty are not implemented yet?
>
> I've changed [lq] for ASIs 0x20 and 0x21-0x2f and see no difference. I
> also put in some debug printfs and see that none of these ASIs is
> called after the Solaris kernel is loaded.
>
>> It can break display handling, though.
>
>
> --
> Regards,
> Artyom Tarasenko
>
> solaris/sparc under qemu blog: http://tyom.blogspot.com/
>


* [Qemu-devel] Re: sparc solaris guest, hsfs_putpage: dirty HSFS page
  2010-01-26 19:23     ` Blue Swirl
@ 2010-01-26 22:42       ` Artyom Tarasenko
  2010-01-27 18:01         ` Blue Swirl
  0 siblings, 1 reply; 6+ messages in thread
From: Artyom Tarasenko @ 2010-01-26 22:42 UTC (permalink / raw)
  To: Blue Swirl; +Cc: qemu-devel

2010/1/26 Blue Swirl <blauwirbel@gmail.com>:
> On Tue, Jan 26, 2010 at 7:03 PM, Artyom Tarasenko
> <atar4qemu@googlemail.com> wrote:
>> 2010/1/24 Blue Swirl <blauwirbel@gmail.com>:
>>> On Sun, Jan 24, 2010 at 2:02 AM, Artyom Tarasenko
>>> <atar4qemu@googlemail.com> wrote:
>>>> All Solaris versions which currently boot (from CD) regularly produce
>>>> buckets of "hsfs_putpage: dirty HSFS page" messages.
>>>>
>>>> High Sierra is pretty old and stable code, so it is possible that the
>>>> code is similar to OpenSolaris. I looked in the debugger, and the
>>>> function call hierarchy looks pretty similar.
>>>>
>>>> Now in the OpenSolaris source code there is a nice comment:
>>>> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/hsfs/hsfs_vnops.c#1758
>>>> /*
>>>> * Normally pvn_getdirty() should return 0, which
>>>> * implies that it has done the job for us.
>>>> * The shouldn't-happen scenario is when it returns 1.
>>>> * This means that the page has been modified and
>>>> * needs to be put back.
>>>> * Since we can't write on a CD, we fake a failed
>>>> * I/O and force pvn_write_done() to destroy the page.
>>>> */
>>>> if (pvn_getdirty(pp, flags) == 1) {
>>>>                cmn_err(CE_NOTE,
>>>>                            "hsfs_putpage: dirty HSFS page");
>>>>
>>>> Now the question: does the problem have to do with qemu's
>>>> (non-)emulation of caches? Can it be that we mark non-dirty pages
>>>> dirty? Or does qemu always mark pages dirty precisely to avoid cache
>>>> emulation?
>>>>
>>>> Otherwise it means something else goes astray and the Solaris guest
>>>> really modifies pages it shouldn't.
>>>>
>>>> I just wonder which to dig into first, MMU or IRQ emulation (the two
>>>> most obvious suspects).
>>>
>>> Maybe the stores via MMU bypass ASIs
>>
>> Why bypass stores? What about the non-bypass ones?
>
> Because their use should update the PTE dirty bits.

update != always set. Where is it implemented? I guess the code is
shared between multiple architectures.
Is there a way to trace at what point a certain page is getting dirty?

Since it's not the bypass ASIs it must be something else.

>>> should use
>>> st[bwlq]_phys_notdirty.
>>
>> Seems that st[bw]_phys_notdirty are not implemented yet?
>>
>> I've changed [lq] for ASIs 0x20 and 0x21-0x2f and see no difference. I
>> also put in some debug printfs and see that none of these ASIs is
>> called after the Solaris kernel is loaded.


-- 
Regards,
Artyom Tarasenko

solaris/sparc under qemu blog: http://tyom.blogspot.com/


* [Qemu-devel] Re: sparc solaris guest, hsfs_putpage: dirty HSFS page
  2010-01-26 22:42       ` Artyom Tarasenko
@ 2010-01-27 18:01         ` Blue Swirl
  0 siblings, 0 replies; 6+ messages in thread
From: Blue Swirl @ 2010-01-27 18:01 UTC (permalink / raw)
  To: Artyom Tarasenko; +Cc: qemu-devel

On Tue, Jan 26, 2010 at 10:42 PM, Artyom Tarasenko
<atar4qemu@googlemail.com> wrote:
> 2010/1/26 Blue Swirl <blauwirbel@gmail.com>:
>> On Tue, Jan 26, 2010 at 7:03 PM, Artyom Tarasenko
>> <atar4qemu@googlemail.com> wrote:
>>> 2010/1/24 Blue Swirl <blauwirbel@gmail.com>:
>>>> On Sun, Jan 24, 2010 at 2:02 AM, Artyom Tarasenko
>>>> <atar4qemu@googlemail.com> wrote:
>>>>> All Solaris versions which currently boot (from CD) regularly produce
>>>>> buckets of "hsfs_putpage: dirty HSFS page" messages.
>>>>>
>>>>> High Sierra is pretty old and stable code, so it is possible that the
>>>>> code is similar to OpenSolaris. I looked in the debugger, and the
>>>>> function call hierarchy looks pretty similar.
>>>>>
>>>>> Now in the OpenSolaris source code there is a nice comment:
>>>>> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/hsfs/hsfs_vnops.c#1758
>>>>> /*
>>>>> * Normally pvn_getdirty() should return 0, which
>>>>> * implies that it has done the job for us.
>>>>> * The shouldn't-happen scenario is when it returns 1.
>>>>> * This means that the page has been modified and
>>>>> * needs to be put back.
>>>>> * Since we can't write on a CD, we fake a failed
>>>>> * I/O and force pvn_write_done() to destroy the page.
>>>>> */
>>>>> if (pvn_getdirty(pp, flags) == 1) {
>>>>>                cmn_err(CE_NOTE,
>>>>>                            "hsfs_putpage: dirty HSFS page");
>>>>>
>>>>> Now the question: does the problem have to do with qemu's
>>>>> (non-)emulation of caches? Can it be that we mark non-dirty pages
>>>>> dirty? Or does qemu always mark pages dirty precisely to avoid cache
>>>>> emulation?
>>>>>
>>>>> Otherwise it means something else goes astray and the Solaris guest
>>>>> really modifies pages it shouldn't.
>>>>>
>>>>> I just wonder which to dig into first, MMU or IRQ emulation (the two
>>>>> most obvious suspects).
>>>>
>>>> Maybe the stores via MMU bypass ASIs
>>>
>>> Why bypass stores? What about the non-bypass ones?
>>
>> Because their use should update the PTE dirty bits.
>
> update != always set. Where is it implemented? I guess the code is
> shared between multiple architectures.
> Is there a way to trace at what point a certain page is getting dirty?
>
> Since it's not the bypass ASIs it must be something else.

target-sparc/helper.c:193 for the page table dirtiness (this is
probably what Solaris can detect).

There is another kind of dirtiness in exec.c; grep for uses of
phys_ram_dirty. But this should not be visible to the guest.
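The two notions of dirtiness can be modeled side by side: the guest-visible PTE M bit set by translated writes, and qemu's internal per-page bookkeeping used for things like display refresh and translated-code invalidation. The flag names below are assumptions for the sketch, not exec.c's actual constants.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model of the two kinds of dirtiness discussed above.
 * The internal bitmap kept separate flags per consumer; the names and
 * values here are stand-ins, not qemu's real definitions. */
#define CODE_DIRTY_FLAG 0x01u  /* invalidate translated code on write */
#define VGA_DIRTY_FLAG  0x02u  /* redraw the affected display region  */

#define PG_MODIFIED_MASK (1u << 6)  /* SRMMU PTE M bit */

static uint8_t  internal_dirty;  /* qemu-internal, never guest-visible */
static uint32_t guest_pte;       /* guest-visible PTE */

/* Every store to RAM updates the internal bookkeeping, but only a
 * write that goes through MMU translation should set the guest's
 * PTE M bit.  If the two get conflated, the guest sees pages as
 * modified that it never wrote -- which is what a spurious
 * "hsfs_putpage: dirty HSFS page" would look like. */
static void guest_write(int via_mmu)
{
    internal_dirty |= CODE_DIRTY_FLAG | VGA_DIRTY_FLAG;
    if (via_mmu)
        guest_pte |= PG_MODIFIED_MASK;
}
```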



Thread overview: 6+ messages
2010-01-24  0:02 [Qemu-devel] sparc solaris guest, hsfs_putpage: dirty HSFS page Artyom Tarasenko
2010-01-24  8:56 ` [Qemu-devel] " Blue Swirl
2010-01-26 17:03   ` Artyom Tarasenko
2010-01-26 19:23     ` Blue Swirl
2010-01-26 22:42       ` Artyom Tarasenko
2010-01-27 18:01         ` Blue Swirl
