* using identity_mapping_add & switching MMU state - how ?
@ 2011-06-02 15:44 Frank Hofmann
2011-06-02 16:20 ` Will Deacon
` (3 more replies)
0 siblings, 4 replies; 11+ messages in thread
From: Frank Hofmann @ 2011-06-02 15:44 UTC (permalink / raw)
To: linux-arm-kernel
Hi,
I'm trying to find a way to do an MMU off / on transition.
What I want to do is to call cpu_do_resume() from the hibernation restore
codepath.
I've succeeded in making this work by adding a test for whether the MMU is
already on when cpu_resume_mmu() is called.
I'm not sure that sort of thing is proper; hence I've been trying to find
a way to disable the MMU before calling cpu_do_resume().
Can't seem to get this to work though; even though I'm creating a separate
MMU context that's given 1:1 mappings for all of kernel code/data,
execution still hangs as soon as I enable the code section below that
switches the MMU off.
What I have at the moment is code that looks like this:
==============================================================================
[ ... ]
unsigned long notrace __swsusp_arch_restore_image(void)
{
extern struct pbe *restore_pblist;
struct pbe *pbe;
/* __swsusp_pg_dir has been created using pgd_alloc(&init_mm); */
cpu_switch_mm(__swsusp_pg_dir, &init_mm);
for (pbe = restore_pblist; pbe; pbe = pbe->next)
copy_page(pbe->orig_address, pbe->address);
flush_tlb_all();
flush_cache_all();
identity_mapping_add(__swsusp_pg_dir, __pa(_stext), __pa(_etext));
identity_mapping_add(__swsusp_pg_dir, __pa(_sdata), __pa(_edata));
cpu_switch_mm(__swsusp_pg_dir, &init_mm);
flush_tlb_all();
flush_cache_all();
cpu_proc_fin(); /* turns caches off */
/* caller requires v:p offset to calculate physical addresses */
return (unsigned long)(PHYS_OFFSET - PAGE_OFFSET);
}
[ ... ]
ENTRY(swsusp_arch_resume)
mov r2, #PSR_I_BIT | PSR_F_BIT | SVC_MODE
msr cpsr_c, r2
/*
* Switch stack to a nosavedata region to make sure image restore
* doesn't clobber it underneath itself.
*/
ldr sp, =(__swsusp_resume_stk + PAGE_SIZE / 2)
bl __swsusp_arch_restore_image
/*
* Restore the CPU registers.
*/
mov r1, r0
ldr r0, =(__swsusp_arch_ctx + (NREGS * 4))
/*
* This is what I'm trying to switch off; yet, doing so makes things hang
*/
#if 0
ldr r2, =cpu_do_resume
sub r2, r1 @ __pa()
ldr r3, =.Lmmu_is_off
sub r3, r1 @ __pa()
sub r0, r1 @ __pa()
ldr lr, =.Lpost_mmu
mrc p15, 0, r1, c1, c0, 0
bic r1, #CR_M
mcr p15, 0, r1, c1, c0, 0 @ MMU OFF
mrc p15, 0, r1, c2, c0, 0 @ queue a dependency on CP15
sub pc, r3, r1, lsr #32 @ to local label, phys addr
.ltorg
.align 5
.Lmmu_is_off:
mov pc, r2 @ jump to phys cpu_v6_do_resume
.Lpost_mmu:
#else
bl cpu_do_resume
#endif
ldr r0, =__swsusp_arch_ctx
ldmia r0!, {r1-r11,lr} @ nonvolatile regs
ldr sp, [r0] @ stack
msr cpsr, r1
msr spsr, r2
mov r0, #0
stmfd sp!, {r0, lr}
bl cpu_init @ reinitialize other modes
ldmfd sp!, {r0, pc}
ENDPROC(swsusp_arch_resume)
==============================================================================
I.e. it performs the steps:
- flush all caches, tlbs
- setup identity mappings for all kernel code & data
- switch to a self-contained pagedir
- flush again
- finish cpu (disable caches)
- switch MMU off, re-read config reg to force necessary wait,
and jump to physical address of "self".
As said, things work just fine if simply doing the "bl cpu_do_resume" and
adding a "MMU already on" check to cpu_resume_mmu().
I must be missing something there; I've been reading the ARM kexec
postings,
http://lists.infradead.org/pipermail/linux-arm-kernel/2010-July/020183.html
for the basic idea, and used the style from smp.c (alloc a temporary
pagedir, create identity mappings there). Still, there's something not
quite right ...
Any ideas what I'm missing ?
Thanks,
FrankH.
* using identity_mapping_add & switching MMU state - how ?
From: Will Deacon @ 2011-06-02 16:20 UTC (permalink / raw)
To: linux-arm-kernel
Hi Frank,
> I'm trying to find a way to do an MMU off / on transition.
You might want to chat with Per Fransson (CC'd) about this since he
posted something similar a while back:
http://lists.infradead.org/pipermail/linux-arm-kernel/2010-November/030470.html
I still believe we need some code along these lines for kexec and some
instances of CPU hotplug (e.g. on Versatile and RealView platforms).
Will
* using identity_mapping_add & switching MMU state - how ?
From: Dave Martin @ 2011-06-02 16:22 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Jun 02, 2011 at 04:44:02PM +0100, Frank Hofmann wrote:
> Hi,
>
>
> I'm trying to find a way to do an MMU off / on transition.
>
> What I want to do is to call cpu_do_resume() from the hibernation
> restore codepath.
>
> I've succeeded in making this work by adding a test for whether the MMU is
> already on when cpu_resume_mmu() is called.
>
> I'm not sure that sort of thing is proper; hence I've been trying to
> find a way to disable the MMU before calling cpu_do_resume().
>
> Can't seem to get this to work though; even though I'm creating a
> separate MMU context that's given 1:1 mappings for all of kernel
> code/data, execution still hangs as soon as I enable the code
> section below that switches the MMU off.
I think Per Fransson has been working on this area for kexec.
Given the intrinsic scariness of code for turning MMUs off, I think
it would be best if there's a single common implementation which is
usable for hibernation and kexec, and any other code which needs
to do this (CPU hotplug?)
Maybe Per has some ideas on how to do that...
Cheers
---Dave
>
>
> What I have at the moment is code that looks like this:
>
> ==============================================================================
> [ ... ]
> unsigned long notrace __swsusp_arch_restore_image(void)
> {
> extern struct pbe *restore_pblist;
> struct pbe *pbe;
>
> /* __swsusp_pg_dir has been created using pgd_alloc(&init_mm); */
>
> cpu_switch_mm(__swsusp_pg_dir, &init_mm);
>
> for (pbe = restore_pblist; pbe; pbe = pbe->next)
> copy_page(pbe->orig_address, pbe->address);
>
> flush_tlb_all();
> flush_cache_all();
>
> identity_mapping_add(__swsusp_pg_dir, __pa(_stext), __pa(_etext));
> identity_mapping_add(__swsusp_pg_dir, __pa(_sdata), __pa(_edata));
>
> cpu_switch_mm(__swsusp_pg_dir, &init_mm);
>
> flush_tlb_all();
> flush_cache_all();
>
> cpu_proc_fin(); /* turns caches off */
>
> /* caller requires v:p offset to calculate physical addresses */
> return (unsigned long)(PHYS_OFFSET - PAGE_OFFSET);
> }
>
> [ ... ]
> ENTRY(swsusp_arch_resume)
> mov r2, #PSR_I_BIT | PSR_F_BIT | SVC_MODE
> msr cpsr_c, r2
> /*
> * Switch stack to a nosavedata region to make sure image restore
> * doesn't clobber it underneath itself.
> */
> ldr sp, =(__swsusp_resume_stk + PAGE_SIZE / 2)
> bl __swsusp_arch_restore_image
>
> /*
> * Restore the CPU registers.
> */
> mov r1, r0
> ldr r0, =(__swsusp_arch_ctx + (NREGS * 4))
> /*
> * This is what I'm trying to switch off; yet, doing so makes things hang
> */
> #if 0
> ldr r2, =cpu_do_resume
> sub r2, r1 @ __pa()
> ldr r3, =.Lmmu_is_off
> sub r3, r1 @ __pa()
> sub r0, r1 @ __pa()
> ldr lr, =.Lpost_mmu
> mrc p15, 0, r1, c1, c0, 0
> bic r1, #CR_M
> mcr p15, 0, r1, c1, c0, 0 @ MMU OFF
>
> mrc p15, 0, r1, c2, c0, 0 @ queue a dependency on CP15
> sub pc, r3, r1, lsr #32 @ to local label, phys addr
> .ltorg
> .align 5
> .Lmmu_is_off:
> mov pc, r2 @ jump to phys cpu_v6_do_resume
> .Lpost_mmu:
> #else
> bl cpu_do_resume
> #endif
> ldr r0, =__swsusp_arch_ctx
> ldmia r0!, {r1-r11,lr} @ nonvolatile regs
> ldr sp, [r0] @ stack
> msr cpsr, r1
> msr spsr, r2
>
> mov r0, #0
> stmfd sp!, {r0, lr}
> bl cpu_init @ reinitialize other modes
> ldmfd sp!, {r0, pc}
> ENDPROC(swsusp_arch_resume)
> ==============================================================================
>
> I.e. it performs the steps:
>
> - flush all caches, tlbs
> - setup identity mappings for all kernel code & data
> - switch to a self-contained pagedir
> - flush again
> - finish cpu (disable caches)
> - switch MMU off, re-read config reg to force necessary wait,
> and jump to physical address of "self".
>
>
> As said, things work just fine if simply doing the "bl
> cpu_do_resume" and adding a "MMU already on" check to
> cpu_resume_mmu().
>
>
> I must be missing something there; I've been reading the ARM kexec
> postings,
>
> http://lists.infradead.org/pipermail/linux-arm-kernel/2010-July/020183.html
>
> for the basic idea, and used the style from smp.c (alloc a temporary
> pagedir, create identity mappings there). Still, there's something
> not quite right ...
>
>
> Any ideas what I'm missing ?
> Thanks,
>
> FrankH.
>
* using identity_mapping_add & switching MMU state - how ?
From: Frank Hofmann @ 2011-06-02 16:46 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, 2 Jun 2011, Dave Martin wrote:
> On Thu, Jun 02, 2011 at 04:44:02PM +0100, Frank Hofmann wrote:
>> Hi,
>>
>>
>> I'm trying to find a way to do an MMU off / on transition.
>>
>> What I want to do is to call cpu_do_resume() from the hibernation
>> restore codepath.
>>
>> I've succeeded in making this work by adding a test for whether the MMU is
>> already on when cpu_resume_mmu() is called.
>>
>> I'm not sure that sort of thing is proper; hence I've been trying to
>> find a way to disable the MMU before calling cpu_do_resume().
>>
>> Can't seem to get this to work though; even though I'm creating a
>> separate MMU context that's given 1:1 mappings for all of kernel
>> code/data, execution still hangs as soon as I enable the code
>> section below that switches the MMU off.
>
> I think Per Fransson has been working on this area for kexec.
>
> Given the intrinsic scariness of code for turning MMUs off, I think
> it would be best if there's a single common implementation which is
> usable for hibernation and kexec, and any other code which needs
> to do this (CPU hotplug?)
>
> Maybe Per has some ideas on how to do that...
>
> Cheers
> ---Dave
Dave, Will,
thanks for the pointer.
I agree common code to do this sort of transition would be useful, both
for kexec and hibernation.
The code for cpu_reset() on the _older_ ARM architectures does switch the
MMU off, but doing so for v6/v7 isn't mainline (it's been discussed in
various patch submissions, by Per and others, related to kexec over the
last year).
It seems it isn't quite as simple as adding a cpu_reset() which zeroes
CR_M in the control register. I'm unsure what works and who has made it
work ...
Ideally, I'd hope for being able to do the transition like this:
/* set up whatever 1:1 mappings are needed - which ones ?! */
flush_cache_all();
flush_tlb_all();
/* flush/disable whatever other caches ... */
cpu_proc_fin(); /* i/d cache off */
cpu_reset(__pa(target_function)); /* MMU off, jump physical */
which seems pretty much the usecase that machine_kexec() also does.
It just doesn't look like the code is reached.
The only real difference between the hibernation restore and the kexec
path seems to be that kexec "reuses" current->mm(->pgd) while hibernation
restore uses init_mm and a throwaway pagedir; the latter means there's no
need to attempt saving/restoring the pmds overwritten by the identity
mappings.
FrankH.
* using identity_mapping_add & switching MMU state - how ?
From: Lorenzo Pieralisi @ 2011-06-02 18:07 UTC (permalink / raw)
To: linux-arm-kernel
Hi Frank,
On Thu, 2011-06-02 at 16:44 +0100, Frank Hofmann wrote:
> Hi,
>
>
> I'm trying to find a way to do an MMU off / on transition.
>
> What I want to do is to call cpu_do_resume() from the hibernation restore
> codepath.
>
> I've succeeded in making this work by adding a test for whether the MMU is
> already on when cpu_resume_mmu() is called.
>
> I'm not sure that sort of thing is proper; hence I've been trying to find
> a way to disable the MMU before calling cpu_do_resume().
>
> Can't seem to get this to work though; even though I'm creating a separate
> MMU context that's given 1:1 mappings for all of kernel code/data,
> execution still hangs as soon as I enable the code section below that
> switches the MMU off.
>
>
> What I have at the moment is code that looks like this:
>
> ==============================================================================
> [ ... ]
> unsigned long notrace __swsusp_arch_restore_image(void)
> {
> extern struct pbe *restore_pblist;
> struct pbe *pbe;
>
> /* __swsusp_pg_dir has been created using pgd_alloc(&init_mm); */
>
> cpu_switch_mm(__swsusp_pg_dir, &init_mm);
>
> for (pbe = restore_pblist; pbe; pbe = pbe->next)
> copy_page(pbe->orig_address, pbe->address);
>
> flush_tlb_all();
> flush_cache_all();
>
> identity_mapping_add(__swsusp_pg_dir, __pa(_stext), __pa(_etext));
> identity_mapping_add(__swsusp_pg_dir, __pa(_sdata), __pa(_edata));
>
> cpu_switch_mm(__swsusp_pg_dir, &init_mm);
>
> flush_tlb_all();
> flush_cache_all();
>
> cpu_proc_fin(); /* turns caches off */
>
> /* caller requires v:p offset to calculate physical addresses */
> return (unsigned long)(PHYS_OFFSET - PAGE_OFFSET);
> }
>
> [ ... ]
> ENTRY(swsusp_arch_resume)
> mov r2, #PSR_I_BIT | PSR_F_BIT | SVC_MODE
> msr cpsr_c, r2
> /*
> * Switch stack to a nosavedata region to make sure image restore
> * doesn't clobber it underneath itself.
> */
> ldr sp, =(__swsusp_resume_stk + PAGE_SIZE / 2)
> bl __swsusp_arch_restore_image
>
> /*
> * Restore the CPU registers.
> */
> mov r1, r0
> ldr r0, =(__swsusp_arch_ctx + (NREGS * 4))
> /*
> * This is what I'm trying to switch off; yet, doing so makes things hang
> */
> #if 0
> ldr r2, =cpu_do_resume
> sub r2, r1 @ __pa()
> ldr r3, =.Lmmu_is_off
> sub r3, r1 @ __pa()
> sub r0, r1 @ __pa()
> ldr lr, =.Lpost_mmu
> mrc p15, 0, r1, c1, c0, 0
> bic r1, #CR_M
> mcr p15, 0, r1, c1, c0, 0 @ MMU OFF
I know it is a silly question, but are you sure your pc is pointing
to a 1:1 physical address here ? This might explain also why if you call
cpu_do_resume straight off you have to skip the mmu resume code since it
creates a temporary mapping based on the current program counter. if it
is a virtual address it goes pop, methinks.
>
> mrc p15, 0, r1, c2, c0, 0 @ queue a dependency on CP15
> sub pc, r3, r1, lsr #32 @ to local label, phys addr
> .ltorg
> .align 5
> .Lmmu_is_off:
> mov pc, r2 @ jump to phys cpu_v6_do_resume
> .Lpost_mmu:
> #else
> bl cpu_do_resume
> #endif
> ldr r0, =__swsusp_arch_ctx
> ldmia r0!, {r1-r11,lr} @ nonvolatile regs
> ldr sp, [r0] @ stack
> msr cpsr, r1
> msr spsr, r2
>
> mov r0, #0
> stmfd sp!, {r0, lr}
> bl cpu_init @ reinitialize other modes
> ldmfd sp!, {r0, pc}
> ENDPROC(swsusp_arch_resume)
> ==============================================================================
>
> I.e. it performs the steps:
>
> - flush all caches, tlbs
> - setup identity mappings for all kernel code & data
> - switch to a self-contained pagedir
> - flush again
> - finish cpu (disable caches)
> - switch MMU off, re-read config reg to force necessary wait,
> and jump to physical address of "self".
>
>
> As said, things work just fine if simply doing the "bl cpu_do_resume" and
> adding a "MMU already on" check to cpu_resume_mmu().
See comment above, cpu_resume_mmu creates a 1:1 mapping based on the
current PC (it uses adr), it should do no harm unless PC is running
virtual.
Lorenzo
* using identity_mapping_add & switching MMU state - how ?
From: Russell King - ARM Linux @ 2011-06-02 20:16 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Jun 02, 2011 at 04:44:02PM +0100, Frank Hofmann wrote:
> I'm trying to find a way to do an MMU off / on transition.
>
> What I want to do is to call cpu_do_resume() from the hibernation restore
> codepath.
>
> I've succeeded in making this work by adding a test for whether the MMU is
> already on when cpu_resume_mmu() is called.
I think you're making things more complicated than they need to be.
> I'm not sure that sort of thing is proper; hence I've been trying to find
> a way to disable the MMU before calling cpu_do_resume().
Well, I don't really condone using cpu_do_resume() outside of its
original context of arch/arm/kernel/sleep.S - the two were written
to work together, and the processor specific bit is not really
designed to be used separately.
> Can't seem to get this to work though; even though I'm creating a
> separate MMU context that's given 1:1 mappings for all of kernel
> code/data, execution still hangs as soon as I enable the code section
> below that switches the MMU off.
I'm not surprised - cpu_do_resume() has some special entry requirements
which are not obvious by way of the rest of the code in
arch/arm/kernel/sleep.S. As I say, cpu_do_resume is not designed to be
used by code other than what's in that file.
But... it requires the v:p offset in r1, and the _virtual_ address of
the code to continue at in lr. These are specifically saved for it by
cpu_suspend in arch/arm/kernel/sleep.S.
* using identity_mapping_add & switching MMU state - how ?
From: Frank Hofmann @ 2011-06-03 10:44 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, 2 Jun 2011, Russell King - ARM Linux wrote:
> On Thu, Jun 02, 2011 at 04:44:02PM +0100, Frank Hofmann wrote:
>> I'm trying to find a way to do an MMU off / on transition.
>>
>> What I want to do is to call cpu_do_resume() from the hibernation restore
>> codepath.
>>
>> I've succeeded in making this work by adding a test for whether the MMU is
>> already on when cpu_resume_mmu() is called.
>
> I think you're making things more complicated than they need to be.
I agree - should've been more precise; what I want as final code for the
hibernation paths is to _use_ cpu_suspend/cpu_resume directly. See below.
>
>> I'm not sure that sort of thing is proper; hence I've been trying to find
>> a way to disable the MMU before calling cpu_do_resume().
>
> Well, I don't really condone using cpu_do_resume() outside of its
> original context of arch/arm/kernel/sleep.S - the two were written
> to work together, and the processor specific bit is not really
> designed to be used separately.
With that I agree ...
But that is also exactly what I'm attempting to do - I don't want to
leapfrog anything in sleep.S, but ideally _use_ the code directly ... i.e.
call both cpu_suspend and cpu_resume from the hibernation suspend/resume
codepaths.
There are two issues currently blocking this:
a) hibernation and suspend-to-ram cannot share the register save area,
because suspend-to-ram could be invoked while writing the snapshot.
(the global sleep_save_sp is the issue there - the hibernation code
must do some sort of fixup to get that out of the way)
That's the reason why _at the moment_ the hibernation codepath is calling
cpu_do_suspend directly instead of cpu_suspend; mere laziness on my side
not yet having bothered to either change cpu_suspend or call it unmodified
via sp fixup and extract the reg buffer / phys addr afterwards.
The current behaviour, i.e. the direct invocation of cpu_do_suspend/resume
is not supposed to stay that way.
But the rather significant problem is the second issue:
b) cpu_resume must be called with MMU off - and doing that isn't working
for me at the moment.
Please don't get me wrong - I'm all with you on the _interface_ being
cpu_suspend / cpu_resume and agree that's what it should be.
To make it clear: Once I can do b) above, a) is trivial, just a small
restructuring of the code. Even using cpu_suspend as-is, unmodified, would
be possible by just temporarily fixing up sp before invoking it.
The problem is b) - no way I've attempted of switching the MMU off so far
has ended in a recoverable situation.
>
>> Can't seem to get this to work though; even though I'm creating a
>> separate MMU context that's given 1:1 mappings for all of kernel
>> code/data, execution still hangs as soon as I enable the code section
>> below that switches the MMU off.
>
> I'm not surprised - cpu_do_resume() has some special entry requirements
> which are not obvious by way of the rest of the code in
> arch/arm/kernel/sleep.S. As I say, cpu_do_resume is not designed to be
> used by code other than what's in that file.
>
> But... it requires the v:p offset in r1, and the _virtual_ address of
> the code to continue at in lr. These are specifically saved for it by
> cpu_suspend in arch/arm/kernel/sleep.S.
I've read that - that's why the hibernation resume path has been written
this way. I'm trying to create the conditions necessary, i.e. set up
r0/r1/lr _and_ the MMU state in such a way that it's callable unmodified.
Thing is, as said, calling it as such works - as long as I make
cpu_resume_mmu (!) bypass MMU enabling if the MMU is on already.
Of course, all that proves is that I've got r0 (virtual) and lr (virtual)
right, as only the MMU enabler code actually uses the v:p offset.
But this is digressing a little from the problem at hand, sorry.
The function being called is rather irrelevant to the problem I'm
attempting to address ... it's the MMU-off transition that doesn't seem to
work.
Even 'blinking' CR_M causes a hang for me; i.e. code like:
identity_mapping_add(pgd, __pa(_stext), __pa(_etext));
...
cpu_proc_fin(); /* caches off */
flush_tlb_all();
flush_cache_all();
/* isb(), dsb() make no difference ... */
}
and then returning to:
mrc p15, 0, r0, c1, c0, 0 @ read CR
bic r0, #CR_M
mcr p15, 0, r0, c1, c0, 0 @ off ...
orr r0, #CR_M
mcr p15, 0, r0, c1, c0, 0 @ on ...
... @ hoping it'll just continue here
doesn't work, it ends up in cloud cuckoo land.
FrankH.
* using identity_mapping_add & switching MMU state - how ?
From: Frank Hofmann @ 2011-06-03 12:20 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, 2 Jun 2011, Lorenzo Pieralisi wrote:
[ ... ]
>> mrc p15, 0, r1, c1, c0, 0
>> bic r1, #CR_M
>> mcr p15, 0, r1, c1, c0, 0 @ MMU OFF
>
> I know it is a silly question, but are you sure your pc is pointing
> to a 1:1 physical address here ? This might explain also why if you call
> cpu_do_resume straight off you have to skip the mmu resume code since it
> creates a temporary mapping based on the current program counter. if it
> is a virtual address it goes pop, methinks.
You could be right there - that might actually _be_ the silly thing. After
all, I'm coming from virtual ...
As said, I can't do the following:
mrc p15, 0, r1, c1, c0, 0
bic r1, #CR_M
mcr p15, 0, r1, c1, c0, 0 @ MMU OFF
mrc p15, 0, r1, c1, c0, 0
orr r1, #CR_M
mcr p15, 0, r1, c1, c0, 0 @ MMU ON
That ends in a hang; but yes, the pc when executing this is still
_virtual_.
[ ... ]
> See comment above, cpu_resume_mmu creates a 1:1 mapping based on the
> current PC (it uses adr), it should do no harm unless PC is running
> virtual.
Now just for the fun of it, I've attempted to do (note: on v6 - cpu_reset
is only a `mov pc, r0` there, no MMU off):
identity_mapping_add(pgd_that_is_in_use, __pa(_stext), __pa(_etext));
... flush things ...
cpu_proc_fin();
cpu_reset(__pa(__builtin_return_address(0))); /* return physical */
just to test whether a transfer to a physical / 1:1 mapped addr works,
i.e. if the 1:1 mapping is ok. At return, then:
ldr r4, =.Lvirt_label @ pc-relative load
mov pc, r4 @ back to virtual
.ltorg
.align 5
.Lvirt_label:
/* continue as if normal */
to go back to virtual. But that also hangs.
FrankH.
* using identity_mapping_add & switching MMU state - how ?
From: Frank Hofmann @ 2011-06-03 15:56 UTC (permalink / raw)
To: linux-arm-kernel
Hi Russell, hi all,
thanks for the help and suggestions; I've managed to get the V->P->V
transition to work, so the next thing to do for the hibernation code is to
fix it up in such a way that it directly calls cpu_suspend/cpu_resume.
Job for next week.
What I've found that caused a problem was the use of a "mint" pagedir.
I've been using this:
/* pre-resume */
pgd_t *__swsusp_pg_dir = pgd_alloc(&init_mm);
cpu_switch_mm(__swsusp_pg_dir, &init_mm);
... [ restore copy loop ] ...
identity_mapping_add(__swsusp_pg_dir, __pa(_stext), __pa(_etext));
identity_mapping_add(__swsusp_pg_dir, __pa(_sdata), __pa(_edata));
cpu_switch_mm(__swsusp_pg_dir, &init_mm);
cpu_proc_fin();
flush_tlb_all();
flush_cache_all();
cpu_reset(__pa(__builtin_return_address(0)));
to return to the 1:1-mapped resume function.
But that would hang; I believe it's simply fallen foul of the kernel heap
overwrite done by the copy loop.
I've noticed it works just fine if using the initial tables.
cpu_switch_mm(swapper_pg_dir, &init_mm);
and setting the identity mappings within swapper_pg_dir.
I'd prefer to get away from "abusing" swapper_pg_dir for this; I don't
think it should have these identity mappings in it (and right now I'm not
attempting to take them out again). But I don't yet have a good idea where
else to get a usable pagedir from.
Anyway ... a great weekend to everyone !
FrankH.
* using identity_mapping_add & switching MMU state - how ?
From: Russell King - ARM Linux @ 2011-06-03 23:53 UTC (permalink / raw)
To: linux-arm-kernel
On Fri, Jun 03, 2011 at 11:44:02AM +0100, Frank Hofmann wrote:
> On Thu, 2 Jun 2011, Russell King - ARM Linux wrote:
>> But... it requires the v:p offset in r1, and the _virtual_ address of
>> the code to continue at in lr. These are specifically saved for it by
>> cpu_suspend in arch/arm/kernel/sleep.S.
>
> I've read that - that's why the hibernation resume path has been written
> this way. I'm trying to create the conditions necessary, i.e. set up
> r0/r1/lr _and_ the MMU state in such a way that it's callable unmodified.
I think you missed my point. Go back and look at this bit of your code,
and identify what _exactly_ is in r1 at the point cpu_do_resume is
called. I'll give you a hint - it's not the v:p offset as required.
ldr r2, =cpu_do_resume
sub r2, r1 @ __pa()
ldr r3, =.Lmmu_is_off
sub r3, r1 @ __pa()
sub r0, r1 @ __pa()
ldr lr, =.Lpost_mmu
mrc p15, 0, r1, c1, c0, 0
bic r1, #CR_M
mcr p15, 0, r1, c1, c0, 0 @ MMU OFF
mrc p15, 0, r1, c2, c0, 0 @ queue a dependency on CP15
sub pc, r3, r1, lsr #32 @ to local label, phys addr
.ltorg
.align 5
.Lmmu_is_off:
mov pc, r2 @ jump to phys cpu_v6_do_resume
* using identity_mapping_add & switching MMU state - how ?
From: Frank Hofmann @ 2011-06-06 16:32 UTC (permalink / raw)
To: linux-arm-kernel
On Sat, 4 Jun 2011, Russell King - ARM Linux wrote:
> On Fri, Jun 03, 2011 at 11:44:02AM +0100, Frank Hofmann wrote:
>> On Thu, 2 Jun 2011, Russell King - ARM Linux wrote:
>>> But... it requires the v:p offset in r1, and the _virtual_ address of
>>> the code to continue at in lr. These are specifically saved for it by
>>> cpu_suspend in arch/arm/kernel/sleep.S.
>>
>> I've read that - that's why the hibernation resume path has been written
>> this way. I'm trying to create the conditions necessary, i.e. set up
>> r0/r1/lr _and_ the MMU state in such a way that it's callable unmodified.
>
> I think you missed my point. Go back and look at this bit of your code,
> and identify what _exactly_ is in r1 at the point cpu_do_resume is
> called. I'll give you a hint - it's not the v:p offset as required.
Ah, ok, double stupid mistake really - the inverted v:p, and then #CR_M
for the MMU off ... neither is correct.
Kudos to your eyesight ;-) Great spotting ! I wish every reviewer were
that thorough.
I got the MMU transition to work on Friday - that's where the above two
came in ... I wasn't verbose enough at the time to say exactly what needed
changing to make it work.
I've adapted the hibernation code now so that it does directly use
cpu_suspend / cpu_resume, and it appears to work on v6 / ARM1176 (need to
test others still - OMAP3 ...).
The hibernation restore code now calls cpu_resume() like this:
void * notrace __swsusp_arch_restore_image(void)
{
extern struct pbe *restore_pblist;
extern void cpu_resume(void);
extern void *sleep_save_sp;
struct pbe *pbe;
cpu_switch_mm(swapper_pg_dir, &init_mm);
for (pbe = restore_pblist; pbe; pbe = pbe->next)
copy_page(pbe->orig_address, pbe->address);
sleep_save_sp = *((u32*)__swsusp_arch_ctx);
flush_tlb_all();
flush_cache_all();
identity_mapping_add(swapper_pg_dir, __pa(_stext), __pa(_etext));
identity_mapping_add(swapper_pg_dir, __pa(_sdata), __pa(_edata));
flush_tlb_all();
flush_cache_all();
cpu_proc_fin();
flush_tlb_all();
flush_cache_all();
cpu_reset(__pa(cpu_resume));
}
There are a few things about it that I'm not fully happy with quite yet:
a) The fixup of sleep_save_sp is ugly (and assumes that for SMP, the
boot CPU is always number 0). The really ugly bit is the assembly
code in swsusp_arch_suspend ...
It'd be better to have an interface for setting / retrieving it.
How do you think this should be dealt with ?
b) Embedding 1:1 mappings for the MMU off transition into swapper_pg_dir
is currently done without cleaning up.
The hibernation code should really leave no traces behind.
c) it needs a cpu_reset that disables the MMU. cpu_v6/v7_reset don't do
so, hence a corresponding modification to arch/arm/mm/proc-*.S is
needed (that's no different to e.g. KEXEC, really ...)
I'll send another patch once I've done OMAP3 testing.
Thanks,
FrankH.
>
> ldr r2, =cpu_do_resume
> sub r2, r1 @ __pa()
> ldr r3, =.Lmmu_is_off
> sub r3, r1 @ __pa()
> sub r0, r1 @ __pa()
> ldr lr, =.Lpost_mmu
> mrc p15, 0, r1, c1, c0, 0
> bic r1, #CR_M
> mcr p15, 0, r1, c1, c0, 0 @ MMU OFF
>
> mrc p15, 0, r1, c2, c0, 0 @ queue a dependency on CP15
> sub pc, r3, r1, lsr #32 @ to local label, phys addr
> .ltorg
> .align 5
> .Lmmu_is_off:
> mov pc, r2 @ jump to phys cpu_v6_do_resume
>