From: Mike Kronenberg
To: qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] Mac OS X issues
Date: Mon, 12 Dec 2005 18:38:17 +0100

Attached is a diff against the last fully working CVS snapshot I had
(i.e., before mp was committed).

On 11.12.2005, at 21:47, Joachim Henke wrote:

>> gcc4 and -fno-tree-ch did the trick for me, too.
>> -fno-tree-ch was mentioned earlier on this list as a way to compile
>> with gcc4 on OS X. But since gcc4 is still not in the default
>> toolchain, I did not even try :(.
>>
>> Seems that we have a problem with gcc3.3 and not gcc4 for once :)
>> The error behavior is similar on your machine. I also got a bus
>> error (sometimes it did not reach the menu either; I started to hit
>> the 3 very early, so I could go right through it...). Early crashes
>> always happened for me on MS-DOS 6.22 and DOS 7.
>>
>> Tested it with DOS 6.22 and DOS 7 (Win95). No problems so far.
>
> In the meantime I have tracked the whole thing down a little
> further. Debugging with GDB returns these messages when qemu (a pure
> GCC3 build) crashes:
>
> Program received signal EXC_BAD_ACCESS, Could not access memory.
> Reason: KERN_PROTECTION_FAILURE at address: 0x00000034
> 0x000621e0 in tb_invalidate_phys_page_range (start=630485,
>     end=630486, is_cpu_write_access=1)
>     at /Volumes/Data/build/qemu/exec.c:491
> 491         tb2 = tb1->jmp_next[n1];
>
> So one could assume that the problem is in exec.c - but some more
> compile tests have shown that GCC4 is required _only_ for cpu-exec.c.
> All other source files can be built with GCC3, and qemu will still
> run stably on Mac OS X (such a mixed build is probably faster than a
> pure GCC4 build). I do think the problem is somewhere in cpu-exec.c
> or one of its includes - maybe they initialize the data that exec.c
> crashes on. Hopefully I'll be a bit luckier finding it tomorrow. I'm
> now scanning through the changes in exec.c...
>
>> I hope Fabrice stumbles upon this.
>>
>> Maybe we should make the patch a little more selective with "ifeq
>> ($(CONFIG_DARWIN),yes)" and post it.
>> Don't know whether this affects other platforms, too...
>
> Yes, my patch was just a simple one for experimental purposes. It
> must not go into CVS, since it breaks compilation with GCC3 or
> earlier. Despite the fact that GCC4 is required only for cpu-exec.c,
> I also think that building with GCC4 should be generally supported,
> since all emulated architectures compile cleanly with it (at least
> on Mac OS X).
>
> Instead of requiring --disable-gcc-check, the configure script could
> write the option HAVE_GCC4=yes to config-host.mak if it detects
> GCC4. The warning message could be kept. Makefile.target can then
> decide which CFLAGS should be used.
>
> I could create such a patch if people agree.

My guess is that things will stay as they are, since gcc4 is not a
target in the near future, as has been stated many times. The GCC4
warning was only added because there were a lot of annoying posts on
the list about "can't compile on GCC4". So GCC4 is only a workaround
:) for now. The thing is, only qemu-system works when compiled with
gcc4; qemu-user does not.

Mike
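Joachim's configure idea quoted above could look roughly like this (a
sketch only; the version test and the exact warning text are
assumptions, not actual QEMU build code, though config-host.mak,
CONFIG_DARWIN and -fno-tree-ch are all from this thread):

    # configure: record the compiler generation instead of aborting;
    # the existing warning message can stay
    gcc_major=`$cc -dumpversion | cut -d. -f1`
    if test "$gcc_major" -ge 4 ; then
        echo "WARNING: gcc 4.x is only lightly tested with qemu"
        echo "HAVE_GCC4=yes" >> config-host.mak
    fi

    # Makefile.target: choose CFLAGS per compiler generation, kept
    # selective to Darwin as suggested earlier in the thread
    ifeq ($(CONFIG_DARWIN),yes)
    ifeq ($(HAVE_GCC4),yes)
    CFLAGS += -fno-tree-ch
    endif
    endif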
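On the crash itself: the line GDB stops on walks a list whose
pointers carry a tag in their low two bits, which makes a fault at a
tiny address like 0x00000034 (a small field offset off a bogus
pointer) the expected failure shape. A stand-alone sketch of that
tagged-pointer convention (illustrative only; this struct layout is
assumed and is not QEMU's real TranslationBlock):

    #include <stdio.h>
    #include <stdint.h>

    typedef struct TB {
        struct TB *jmp_next[2];   /* circular jump-list links */
        struct TB *jmp_first;     /* list head; low 2 bits = slot tag */
    } TB;

    int main(void)
    {
        TB a = {{NULL, NULL}, NULL};

        /* the "fail safe" head, as in the diff below:
           tb->jmp_first = (TranslationBlock *)((long)tb | 2); */
        a.jmp_first = (TB *)((uintptr_t)&a | 2);

        /* a correct walk strips the tag before dereferencing */
        TB *t = a.jmp_first;
        int n = (uintptr_t)t & 3;            /* 0/1 = slot, 2 = end */
        t = (TB *)((uintptr_t)t & ~(uintptr_t)3);
        if (n == 2)
            printf("end of list\n");
        else
            printf("next = %p\n", (void *)t->jmp_next[n]);

        /* a walk that hits a stale or uninitialized entry instead
           dereferences near-NULL plus a small offset, consistent
           with KERN_PROTECTION_FAILURE at 0x00000034 */
        return 0;
    }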
[Attachment: regression2.diff]

--- qemu/exec.c	2005-11-28 22:19:04.000000000 +0100
+++ qemu_osx_midi/exec.c	2005-09-03 12:49:04.000000000 +0200
@@ -61,6 +61,7 @@
 #endif
 
 TranslationBlock tbs[CODE_GEN_MAX_BLOCKS];
+TranslationBlock *tb_hash[CODE_GEN_HASH_SIZE];
 TranslationBlock *tb_phys_hash[CODE_GEN_PHYS_HASH_SIZE];
 int nb_tbs;
 /* any access to the tbs or the page table must use this lock */
@@ -74,11 +75,6 @@
 uint8_t *phys_ram_base;
 uint8_t *phys_ram_dirty;
 
-CPUState *first_cpu;
-/* current CPU in the current thread. It is only valid inside
-   cpu_exec() */
-CPUState *cpu_single_env;
-
 typedef struct PageDesc {
     /* list of TBs intersecting this ram page */
     TranslationBlock *first_tb;
@@ -96,6 +92,20 @@
     uint32_t phys_offset;
 } PhysPageDesc;
 
+/* Note: the VirtPage handling is absolete and will be suppressed
+   ASAP */
+typedef struct VirtPageDesc {
+    /* physical address of code page. It is valid only if 'valid_tag'
+       matches 'virt_valid_tag' */
+    target_ulong phys_addr;
+    unsigned int valid_tag;
+#if !defined(CONFIG_SOFTMMU)
+    /* original page access rights. It is valid only if 'valid_tag'
+       matches 'virt_valid_tag' */
+    unsigned int prot;
+#endif
+} VirtPageDesc;
+
 #define L2_BITS 10
 #define L1_BITS (32 - L2_BITS - TARGET_PAGE_BITS)
@@ -113,6 +123,17 @@
 static PageDesc *l1_map[L1_SIZE];
 PhysPageDesc **l1_phys_map;
 
+#if !defined(CONFIG_USER_ONLY)
+#if TARGET_LONG_BITS > 32
+#define VIRT_L_BITS 9
+#define VIRT_L_SIZE (1 << VIRT_L_BITS)
+static void *l1_virt_map[VIRT_L_SIZE];
+#else
+static VirtPageDesc *l1_virt_map[L1_SIZE];
+#endif
+static unsigned int virt_valid_tag;
+#endif
+
 /* io memory support */
 CPUWriteMemoryFunc *io_mem_write[IO_MEM_NB_ENTRIES][4];
 CPUReadMemoryFunc *io_mem_read[IO_MEM_NB_ENTRIES][4];
@@ -169,6 +190,9 @@
     while ((1 << qemu_host_page_bits) < qemu_host_page_size)
         qemu_host_page_bits++;
     qemu_host_page_mask = ~(qemu_host_page_size - 1);
+#if !defined(CONFIG_USER_ONLY)
+    virt_valid_tag = 1;
+#endif
     l1_phys_map = qemu_vmalloc(L1_SIZE * sizeof(void *));
     memset(l1_phys_map, 0, L1_SIZE * sizeof(void *));
 }
@@ -238,30 +262,133 @@
 }
 
 #if !defined(CONFIG_USER_ONLY)
-static void tlb_protect_code(ram_addr_t ram_addr);
+static void tlb_protect_code(CPUState *env, ram_addr_t ram_addr,
+                             target_ulong vaddr);
 static void tlb_unprotect_code_phys(CPUState *env, ram_addr_t ram_addr,
                                     target_ulong vaddr);
+
+static VirtPageDesc *virt_page_find_alloc(target_ulong index, int alloc)
+{
+#if TARGET_LONG_BITS > 32
+    void **p, **lp;
+
+    p = l1_virt_map;
+    lp = p + ((index >> (5 * VIRT_L_BITS)) & (VIRT_L_SIZE - 1));
+    p = *lp;
+    if (!p) {
+        if (!alloc)
+            return NULL;
+        p = qemu_mallocz(sizeof(void *) * VIRT_L_SIZE);
+        *lp = p;
+    }
+    lp = p + ((index >> (4 * VIRT_L_BITS)) & (VIRT_L_SIZE - 1));
+    p = *lp;
+    if (!p) {
+        if (!alloc)
+            return NULL;
+        p = qemu_mallocz(sizeof(void *) * VIRT_L_SIZE);
+        *lp = p;
+    }
+    lp = p + ((index >> (3 * VIRT_L_BITS)) & (VIRT_L_SIZE - 1));
+    p = *lp;
+    if (!p) {
+        if (!alloc)
+            return NULL;
+        p = qemu_mallocz(sizeof(void *) * VIRT_L_SIZE);
+        *lp = p;
+    }
+    lp = p + ((index >> (2 * VIRT_L_BITS)) & (VIRT_L_SIZE - 1));
+    p = *lp;
+    if (!p) {
+        if (!alloc)
+            return NULL;
+        p = qemu_mallocz(sizeof(void *) * VIRT_L_SIZE);
+        *lp = p;
+    }
+    lp = p + ((index >> (1 * VIRT_L_BITS)) & (VIRT_L_SIZE - 1));
+    p = *lp;
+    if (!p) {
+        if (!alloc)
+            return NULL;
+        p = qemu_mallocz(sizeof(VirtPageDesc) * VIRT_L_SIZE);
+        *lp = p;
+    }
+    return ((VirtPageDesc *)p) + (index & (VIRT_L_SIZE - 1));
+#else
+    VirtPageDesc *p, **lp;
+
+    lp = &l1_virt_map[index >> L2_BITS];
+    p = *lp;
+    if (!p) {
+        /* allocate if not found */
+        if (!alloc)
+            return NULL;
+        p = qemu_mallocz(sizeof(VirtPageDesc) * L2_SIZE);
+        *lp = p;
+    }
+    return p + (index & (L2_SIZE - 1));
+#endif
+}
+
+static inline VirtPageDesc *virt_page_find(target_ulong index)
+{
+    return virt_page_find_alloc(index, 0);
+}
+
+#if TARGET_LONG_BITS > 32
+static void virt_page_flush_internal(void **p, int level)
+{
+    int i;
+    if (level == 0) {
+        VirtPageDesc *q = (VirtPageDesc *)p;
+        for(i = 0; i < VIRT_L_SIZE; i++)
+            q[i].valid_tag = 0;
+    } else {
+        level--;
+        for(i = 0; i < VIRT_L_SIZE; i++) {
+            if (p[i])
+                virt_page_flush_internal(p[i], level);
+        }
+    }
+}
 #endif
 
-void cpu_exec_init(CPUState *env)
+static void virt_page_flush(void)
 {
-    CPUState **penv;
-    int cpu_index;
+    virt_valid_tag++;
+    if (virt_valid_tag == 0) {
+        virt_valid_tag = 1;
+#if TARGET_LONG_BITS > 32
+        virt_page_flush_internal(l1_virt_map, 5);
+#else
+        {
+            int i, j;
+            VirtPageDesc *p;
+            for(i = 0; i < L1_SIZE; i++) {
+                p = l1_virt_map[i];
+                if (p) {
+                    for(j = 0; j < L2_SIZE; j++)
+                        p[j].valid_tag = 0;
+                }
+            }
+        }
+#endif
+    }
+}
+#else
+static void virt_page_flush(void)
+{
+}
+#endif
+
+void cpu_exec_init(void)
+{
     if (!code_gen_ptr) {
         code_gen_ptr = code_gen_buffer;
         page_init();
         io_mem_init();
     }
-    env->next_cpu = NULL;
-    penv = &first_cpu;
-    cpu_index = 0;
-    while (*penv != NULL) {
-        penv = (CPUState **)&(*penv)->next_cpu;
-        cpu_index++;
-    }
-    env->cpu_index = cpu_index;
-    *penv = env;
 }
 
 static inline void invalidate_page_bitmap(PageDesc *p)
@@ -293,9 +420,8 @@
 
 /* flush all the translation blocks */
 /* XXX: tb_flush is currently not thread safe */
-void tb_flush(CPUState *env1)
+void tb_flush(CPUState *env)
 {
-    CPUState *env;
 #if defined(DEBUG_FLUSH)
     printf("qemu: flush code_size=%d nb_tbs=%d avg_tb_size=%d\n",
            code_gen_ptr - code_gen_buffer,
@@ -303,10 +429,8 @@
            nb_tbs > 0 ? (code_gen_ptr - code_gen_buffer) / nb_tbs : 0);
 #endif
     nb_tbs = 0;
-
-    for(env = first_cpu; env != NULL; env = env->next_cpu) {
-        memset (env->tb_jmp_cache, 0, TB_JMP_CACHE_SIZE * sizeof (void *));
-    }
+    memset (tb_hash, 0, CODE_GEN_HASH_SIZE * sizeof (void *));
+    virt_page_flush();
     memset (tb_phys_hash, 0, CODE_GEN_PHYS_HASH_SIZE * sizeof (void *));
     page_flush_tb();
@@ -442,39 +566,27 @@
     tb_set_jmp_target(tb, n, (unsigned long)(tb->tc_ptr + tb->tb_next_offset[n]));
 }
 
-static inline void tb_phys_invalidate(TranslationBlock *tb, unsigned int page_addr)
+static inline void tb_invalidate(TranslationBlock *tb)
 {
-    CPUState *env;
-    PageDesc *p;
     unsigned int h, n1;
-    target_ulong phys_pc;
-    TranslationBlock *tb1, *tb2;
+    TranslationBlock *tb1, *tb2, **ptb;
 
-    /* remove the TB from the hash list */
-    phys_pc = tb->page_addr[0] + (tb->pc & ~TARGET_PAGE_MASK);
-    h = tb_phys_hash_func(phys_pc);
-    tb_remove(&tb_phys_hash[h], tb,
-              offsetof(TranslationBlock, phys_hash_next));
-
-    /* remove the TB from the page list */
-    if (tb->page_addr[0] != page_addr) {
-        p = page_find(tb->page_addr[0] >> TARGET_PAGE_BITS);
-        tb_page_remove(&p->first_tb, tb);
-        invalidate_page_bitmap(p);
-    }
-    if (tb->page_addr[1] != -1 && tb->page_addr[1] != page_addr) {
-        p = page_find(tb->page_addr[1] >> TARGET_PAGE_BITS);
-        tb_page_remove(&p->first_tb, tb);
-        invalidate_page_bitmap(p);
-    }
-
-    tb_invalidated_flag = 1;
 
     /* remove the TB from the hash list */
-    h = tb_jmp_cache_hash_func(tb->pc);
-    for(env = first_cpu; env != NULL; env = env->next_cpu) {
-        if (env->tb_jmp_cache[h] == tb)
-            env->tb_jmp_cache[h] = NULL;
+    h = tb_hash_func(tb->pc);
+    ptb = &tb_hash[h];
+    for(;;) {
+        tb1 = *ptb;
+        /* NOTE: the TB is not necessarily linked in the hash. It
+           indicates that it is not currently used */
+        if (tb1 == NULL)
+            return;
+        if (tb1 == tb) {
+            *ptb = tb1->hash_next;
+            break;
+        }
+        ptb = &tb1->hash_next;
     }
 
     /* suppress this TB from the two jump lists */
@@ -494,7 +606,33 @@
         tb1 = tb2;
     }
     tb->jmp_first = (TranslationBlock *)((long)tb | 2); /* fail safe */
+}
+
+static inline void tb_phys_invalidate(TranslationBlock *tb, unsigned int page_addr)
+{
+    PageDesc *p;
+    unsigned int h;
+    target_ulong phys_pc;
+
+    /* remove the TB from the hash list */
+    phys_pc = tb->page_addr[0] + (tb->pc & ~TARGET_PAGE_MASK);
+    h = tb_phys_hash_func(phys_pc);
+    tb_remove(&tb_phys_hash[h], tb,
+              offsetof(TranslationBlock, phys_hash_next));
+
+    /* remove the TB from the page list */
+    if (tb->page_addr[0] != page_addr) {
+        p = page_find(tb->page_addr[0] >> TARGET_PAGE_BITS);
+        tb_page_remove(&p->first_tb, tb);
+        invalidate_page_bitmap(p);
+    }
+    if (tb->page_addr[1] != -1 && tb->page_addr[1] != page_addr) {
+        p = page_find(tb->page_addr[1] >> TARGET_PAGE_BITS);
+        tb_page_remove(&p->first_tb, tb);
+        invalidate_page_bitmap(p);
+    }
+
+    tb_invalidate(tb);
 
     tb_phys_invalidate_count++;
 }
@@ -672,19 +810,12 @@
 #endif
         }
 #endif /* TARGET_HAS_PRECISE_SMC */
-            /* we need to do that to handle the case where a signal
-               occurs while doing tb_phys_invalidate() */
-            saved_tb = NULL;
-            if (env) {
-                saved_tb = env->current_tb;
-                env->current_tb = NULL;
-            }
+            saved_tb = env->current_tb;
+            env->current_tb = NULL;
             tb_phys_invalidate(tb, -1);
-            if (env) {
-                env->current_tb = saved_tb;
-                if (env->interrupt_request && env->current_tb)
-                    cpu_interrupt(env, env->interrupt_request);
-            }
+            env->current_tb = saved_tb;
+            if (env->interrupt_request && env->current_tb)
+                cpu_interrupt(env, env->interrupt_request);
         }
         tb = tb_next;
     }
@@ -849,7 +980,10 @@
        protected. So we handle the case where only the first TB is
        allocated in a physical page */
     if (!last_first_tb) {
-        tlb_protect_code(page_addr);
+        target_ulong virt_addr;
+
+        virt_addr = (tb->pc & TARGET_PAGE_MASK) + (n << TARGET_PAGE_BITS);
+        tlb_protect_code(cpu_single_env, page_addr, virt_addr);
     }
 #endif
@@ -891,6 +1025,57 @@
         tb_alloc_page(tb, 1, phys_page2);
     else
         tb->page_addr[1] = -1;
+#ifdef DEBUG_TB_CHECK
+    tb_page_check();
+#endif
+}
+
+/* link the tb with the other TBs */
+void tb_link(TranslationBlock *tb)
+{
+#if !defined(CONFIG_USER_ONLY)
+    {
+        VirtPageDesc *vp;
+        target_ulong addr;
+
+        /* save the code memory mappings (needed to invalidate the code) */
+        addr = tb->pc & TARGET_PAGE_MASK;
+        vp = virt_page_find_alloc(addr >> TARGET_PAGE_BITS, 1);
+#ifdef DEBUG_TLB_CHECK
+        if (vp->valid_tag == virt_valid_tag &&
+            vp->phys_addr != tb->page_addr[0]) {
+            printf("Error tb addr=0x%x phys=0x%x vp->phys_addr=0x%x\n",
+                   addr, tb->page_addr[0], vp->phys_addr);
+        }
+#endif
+        vp->phys_addr = tb->page_addr[0];
+        if (vp->valid_tag != virt_valid_tag) {
+            vp->valid_tag = virt_valid_tag;
+#if !defined(CONFIG_SOFTMMU)
+            vp->prot = 0;
+#endif
+        }
+
+        if (tb->page_addr[1] != -1) {
+            addr += TARGET_PAGE_SIZE;
+            vp = virt_page_find_alloc(addr >> TARGET_PAGE_BITS, 1);
+#ifdef DEBUG_TLB_CHECK
+            if (vp->valid_tag == virt_valid_tag &&
+                vp->phys_addr != tb->page_addr[1]) {
+                printf("Error tb addr=0x%x phys=0x%x vp->phys_addr=0x%x\n",
+                       addr, tb->page_addr[1], vp->phys_addr);
+            }
+#endif
+            vp->phys_addr = tb->page_addr[1];
+            if (vp->valid_tag != virt_valid_tag) {
+                vp->valid_tag = virt_valid_tag;
+#if !defined(CONFIG_SOFTMMU)
+                vp->prot = 0;
+#endif
+            }
+        }
+    }
+#endif
 
     tb->jmp_first = (TranslationBlock *)((long)tb | 2);
     tb->jmp_next[0] = NULL;
@@ -906,10 +1091,6 @@
         tb_reset_jump(tb, 0);
     if (tb->tb_next_offset[1] != 0xffff)
         tb_reset_jump(tb, 1);
-
-#ifdef DEBUG_TB_CHECK
-    tb_page_check();
-#endif
 }
 
 /* find the TB 'tb' such that tb[0].tc_ptr <= tc_ptr <
@@ -1209,15 +1390,14 @@
     env->current_tb = NULL;
 
     for(i = 0; i < CPU_TLB_SIZE; i++) {
-        env->tlb_table[0][i].addr_read = -1;
-        env->tlb_table[0][i].addr_write = -1;
-        env->tlb_table[0][i].addr_code = -1;
-        env->tlb_table[1][i].addr_read = -1;
-        env->tlb_table[1][i].addr_write = -1;
-        env->tlb_table[1][i].addr_code = -1;
+        env->tlb_read[0][i].address = -1;
+        env->tlb_write[0][i].address = -1;
+        env->tlb_read[1][i].address = -1;
+        env->tlb_write[1][i].address = -1;
     }
 
-    memset (env->tb_jmp_cache, 0, TB_JMP_CACHE_SIZE * sizeof (void *));
+    virt_page_flush();
+    memset (tb_hash, 0, CODE_GEN_HASH_SIZE * sizeof (void *));
 
 #if !defined(CONFIG_SOFTMMU)
     munmap((void *)MMAP_AREA_START, MMAP_AREA_END - MMAP_AREA_START);
@@ -1232,21 +1412,16 @@
 
 static inline void tlb_flush_entry(CPUTLBEntry *tlb_entry, target_ulong addr)
 {
-    if (addr == (tlb_entry->addr_read &
-                 (TARGET_PAGE_MASK | TLB_INVALID_MASK)) ||
-        addr == (tlb_entry->addr_write &
-                 (TARGET_PAGE_MASK | TLB_INVALID_MASK)) ||
-        addr == (tlb_entry->addr_code &
-                 (TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
-        tlb_entry->addr_read = -1;
-        tlb_entry->addr_write = -1;
-        tlb_entry->addr_code = -1;
-    }
+    if (addr == (tlb_entry->address &
+                 (TARGET_PAGE_MASK | TLB_INVALID_MASK)))
+        tlb_entry->address = -1;
 }
 
 void tlb_flush_page(CPUState *env, target_ulong addr)
 {
-    int i;
+    int i, n;
+    VirtPageDesc *vp;
+    PageDesc *p;
     TranslationBlock *tb;
 
 #if defined(DEBUG_TLB)
@@ -1258,16 +1433,31 @@
     addr &= TARGET_PAGE_MASK;
     i = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
-    tlb_flush_entry(&env->tlb_table[0][i], addr);
-    tlb_flush_entry(&env->tlb_table[1][i], addr);
+    tlb_flush_entry(&env->tlb_read[0][i], addr);
+    tlb_flush_entry(&env->tlb_write[0][i], addr);
+    tlb_flush_entry(&env->tlb_read[1][i], addr);
+    tlb_flush_entry(&env->tlb_write[1][i], addr);
 
-    for(i = 0; i < TB_JMP_CACHE_SIZE; i++) {
-        tb = env->tb_jmp_cache[i];
-        if (tb &&
-            ((tb->pc & TARGET_PAGE_MASK) == addr ||
-             ((tb->pc + tb->size - 1) & TARGET_PAGE_MASK) == addr)) {
-            env->tb_jmp_cache[i] = NULL;
+    /* remove from the virtual pc hash table all the TB at this
+       virtual address */
+
+    vp = virt_page_find(addr >> TARGET_PAGE_BITS);
+    if (vp && vp->valid_tag == virt_valid_tag) {
+        p = page_find(vp->phys_addr >> TARGET_PAGE_BITS);
+        if (p) {
+            /* we remove all the links to the TBs in this virtual page */
+            tb = p->first_tb;
+            while (tb != NULL) {
+                n = (long)tb & 3;
+                tb = (TranslationBlock *)((long)tb & ~3);
+                if ((tb->pc & TARGET_PAGE_MASK) == addr ||
+                    ((tb->pc + tb->size - 1) & TARGET_PAGE_MASK) == addr) {
+                    tb_invalidate(tb);
+                }
+                tb = tb->page_next[n];
+            }
         }
+        vp->valid_tag = 0;
     }
 
 #if !defined(CONFIG_SOFTMMU)
@@ -1281,13 +1471,40 @@
 #endif
 }
 
+static inline void tlb_protect_code1(CPUTLBEntry *tlb_entry, target_ulong addr)
+{
+    if (addr == (tlb_entry->address &
+                 (TARGET_PAGE_MASK | TLB_INVALID_MASK)) &&
+        (tlb_entry->address & ~TARGET_PAGE_MASK) == IO_MEM_RAM) {
+        tlb_entry->address = (tlb_entry->address & TARGET_PAGE_MASK) | IO_MEM_NOTDIRTY;
+    }
+}
+
 /* update the TLBs so that writes to code in the virtual page 'addr'
    can be detected */
-static void tlb_protect_code(ram_addr_t ram_addr)
+static void tlb_protect_code(CPUState *env, ram_addr_t ram_addr,
+                             target_ulong vaddr)
 {
-    cpu_physical_memory_reset_dirty(ram_addr,
-                                    ram_addr + TARGET_PAGE_SIZE,
-                                    CODE_DIRTY_FLAG);
+    int i;
+
+    vaddr &= TARGET_PAGE_MASK;
+    i = (vaddr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+    tlb_protect_code1(&env->tlb_write[0][i], vaddr);
+    tlb_protect_code1(&env->tlb_write[1][i], vaddr);
+
+#ifdef USE_KQEMU
+    if (env->kqemu_enabled) {
+        kqemu_set_notdirty(env, ram_addr);
+    }
+#endif
+    phys_ram_dirty[ram_addr >> TARGET_PAGE_BITS] &= ~CODE_DIRTY_FLAG;
+
+#if !defined(CONFIG_SOFTMMU)
+    /* NOTE: as we generated the code for this page, it is already at
+       least readable */
+    if (vaddr < MMAP_AREA_END)
+        mprotect((void *)vaddr, TARGET_PAGE_SIZE, PROT_READ);
+#endif
 }
 
 /* update the TLB so that writes in physical page 'phys_addr' are no longer
@@ -1302,10 +1519,10 @@
                                          unsigned long start, unsigned long length)
 {
     unsigned long addr;
-    if ((tlb_entry->addr_write & ~TARGET_PAGE_MASK) == IO_MEM_RAM) {
-        addr = (tlb_entry->addr_write & TARGET_PAGE_MASK) + tlb_entry->addend;
+    if ((tlb_entry->address & ~TARGET_PAGE_MASK) == IO_MEM_RAM) {
+        addr = (tlb_entry->address & TARGET_PAGE_MASK) + tlb_entry->addend;
         if ((addr - start) < length) {
-            tlb_entry->addr_write = (tlb_entry->addr_write & TARGET_PAGE_MASK) | IO_MEM_NOTDIRTY;
+            tlb_entry->address = (tlb_entry->address & TARGET_PAGE_MASK) | IO_MEM_NOTDIRTY;
         }
     }
 }
@@ -1325,9 +1542,8 @@
     if (length == 0)
         return;
     len = length >> TARGET_PAGE_BITS;
+    env = cpu_single_env;
 #ifdef USE_KQEMU
-    /* XXX: should not depend on cpu context */
-    env = first_cpu;
     if (env->kqemu_enabled) {
         ram_addr_t addr;
         addr = start;
@@ -1345,12 +1561,10 @@
     /* we modify the TLB cache so that the dirty bit will be set again
        when accessing the range */
     start1 = start + (unsigned long)phys_ram_base;
-    for(env = first_cpu; env != NULL; env = env->next_cpu) {
-        for(i = 0; i < CPU_TLB_SIZE; i++)
-            tlb_reset_dirty_range(&env->tlb_table[0][i], start1, length);
-        for(i = 0; i < CPU_TLB_SIZE; i++)
-            tlb_reset_dirty_range(&env->tlb_table[1][i], start1, length);
-    }
+    for(i = 0; i < CPU_TLB_SIZE; i++)
+        tlb_reset_dirty_range(&env->tlb_write[0][i], start1, length);
+    for(i = 0; i < CPU_TLB_SIZE; i++)
+        tlb_reset_dirty_range(&env->tlb_write[1][i], start1, length);
 
 #if !defined(CONFIG_SOFTMMU)
     /* XXX: this is expensive */
@@ -1385,11 +1599,11 @@
 {
     ram_addr_t ram_addr;
 
-    if ((tlb_entry->addr_write & ~TARGET_PAGE_MASK) == IO_MEM_RAM) {
-        ram_addr = (tlb_entry->addr_write & TARGET_PAGE_MASK) +
+    if ((tlb_entry->address & ~TARGET_PAGE_MASK) == IO_MEM_RAM) {
+        ram_addr = (tlb_entry->address & TARGET_PAGE_MASK) +
             tlb_entry->addend - (unsigned long)phys_ram_base;
         if (!cpu_physical_memory_is_dirty(ram_addr)) {
-            tlb_entry->addr_write |= IO_MEM_NOTDIRTY;
+            tlb_entry->address |= IO_MEM_NOTDIRTY;
         }
     }
 }
@@ -1399,43 +1613,43 @@
 {
     int i;
     for(i = 0; i < CPU_TLB_SIZE; i++)
-        tlb_update_dirty(&env->tlb_table[0][i]);
+        tlb_update_dirty(&env->tlb_write[0][i]);
     for(i = 0; i < CPU_TLB_SIZE; i++)
-        tlb_update_dirty(&env->tlb_table[1][i]);
+        tlb_update_dirty(&env->tlb_write[1][i]);
 }
 
 static inline void tlb_set_dirty1(CPUTLBEntry *tlb_entry, unsigned long start)
 {
     unsigned long addr;
-    if ((tlb_entry->addr_write & ~TARGET_PAGE_MASK) == IO_MEM_NOTDIRTY) {
-        addr = (tlb_entry->addr_write & TARGET_PAGE_MASK) + tlb_entry->addend;
+    if ((tlb_entry->address & ~TARGET_PAGE_MASK) == IO_MEM_NOTDIRTY) {
+        addr = (tlb_entry->address & TARGET_PAGE_MASK) + tlb_entry->addend;
         if (addr == start) {
-            tlb_entry->addr_write = (tlb_entry->addr_write & TARGET_PAGE_MASK) | IO_MEM_RAM;
+            tlb_entry->address = (tlb_entry->address & TARGET_PAGE_MASK) | IO_MEM_RAM;
        }
    }
 }
 
 /* update the TLB corresponding to virtual page vaddr and phys addr
    addr so that it is no longer dirty */
-static inline void tlb_set_dirty(CPUState *env,
-                                 unsigned long addr, target_ulong vaddr)
+static inline void tlb_set_dirty(unsigned long addr, target_ulong vaddr)
 {
+    CPUState *env = cpu_single_env;
     int i;
 
     addr &= TARGET_PAGE_MASK;
     i = (vaddr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
-    tlb_set_dirty1(&env->tlb_table[0][i], addr);
-    tlb_set_dirty1(&env->tlb_table[1][i], addr);
+    tlb_set_dirty1(&env->tlb_write[0][i], addr);
+    tlb_set_dirty1(&env->tlb_write[1][i], addr);
 }
 
 /* add a new TLB entry. At most one entry for a given virtual address
    is permitted. Return 0 if OK or 2 if the page could not be mapped
   (can only happen in non SOFTMMU mode for I/O pages or pages
   conflicting with the host address space). */
-int tlb_set_page_exec(CPUState *env, target_ulong vaddr,
-                      target_phys_addr_t paddr, int prot,
-                      int is_user, int is_softmmu)
+int tlb_set_page(CPUState *env, target_ulong vaddr,
+                 target_phys_addr_t paddr, int prot,
+                 int is_user, int is_softmmu)
 {
     PhysPageDesc *p;
     unsigned long pd;
@@ -1443,7 +1657,6 @@
     target_ulong address;
     target_phys_addr_t addend;
     int ret;
-    CPUTLBEntry *te;
 
     p = phys_page_find(paddr >> TARGET_PAGE_BITS);
     if (!p) {
@@ -1453,7 +1666,7 @@
     }
 #if defined(DEBUG_TLB)
     printf("tlb_set_page: vaddr=" TARGET_FMT_lx " paddr=0x%08x prot=%x u=%d smmu=%d pd=0x%08lx\n",
-           vaddr, (int)paddr, prot, is_user, is_softmmu, pd);
+           vaddr, paddr, prot, is_user, is_softmmu, pd);
 #endif
 
     ret = 0;
@@ -1473,30 +1686,29 @@
 
         index = (vaddr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
         addend -= vaddr;
-        te = &env->tlb_table[is_user][index];
-        te->addend = addend;
         if (prot & PAGE_READ) {
-            te->addr_read = address;
-        } else {
-            te->addr_read = -1;
-        }
-        if (prot & PAGE_EXEC) {
-            te->addr_code = address;
+            env->tlb_read[is_user][index].address = address;
+            env->tlb_read[is_user][index].addend = addend;
         } else {
-            te->addr_code = -1;
+            env->tlb_read[is_user][index].address = -1;
+            env->tlb_read[is_user][index].addend = -1;
         }
         if (prot & PAGE_WRITE) {
             if ((pd & ~TARGET_PAGE_MASK) == IO_MEM_ROM) {
                 /* ROM: access is ignored (same as unassigned) */
-                te->addr_write = vaddr | IO_MEM_ROM;
+                env->tlb_write[is_user][index].address = vaddr | IO_MEM_ROM;
+                env->tlb_write[is_user][index].addend = addend;
             } else if ((pd & ~TARGET_PAGE_MASK) == IO_MEM_RAM &&
                        !cpu_physical_memory_is_dirty(pd)) {
-                te->addr_write = vaddr | IO_MEM_NOTDIRTY;
+                env->tlb_write[is_user][index].address = vaddr | IO_MEM_NOTDIRTY;
+                env->tlb_write[is_user][index].addend = addend;
             } else {
-                te->addr_write = address;
+                env->tlb_write[is_user][index].address = address;
+                env->tlb_write[is_user][index].addend = addend;
             }
         } else {
-            te->addr_write = -1;
+            env->tlb_write[is_user][index].address = -1;
+            env->tlb_write[is_user][index].addend = -1;
        }
    }
 #if !defined(CONFIG_SOFTMMU)
@@ -1595,9 +1807,9 @@
 {
 }
 
-int tlb_set_page_exec(CPUState *env, target_ulong vaddr,
-                      target_phys_addr_t paddr, int prot,
-                      int is_user, int is_softmmu)
+int tlb_set_page(CPUState *env, target_ulong vaddr,
+                 target_phys_addr_t paddr, int prot,
+                 int is_user, int is_softmmu)
 {
     return 0;
 }
@@ -1736,8 +1948,7 @@
     }
 }
 
-static inline void tlb_set_dirty(CPUState *env,
-                                 unsigned long addr, target_ulong vaddr)
+static inline void tlb_set_dirty(unsigned long addr, target_ulong vaddr)
 {
 }
 #endif /* defined(CONFIG_USER_ONLY) */
@@ -1801,7 +2012,7 @@
     /* we remove the notdirty callback only if the code has been
       flushed */
     if (dirty_flags == 0xff)
-        tlb_set_dirty(cpu_single_env, addr, cpu_single_env->mem_write_vaddr);
+        tlb_set_dirty(addr, cpu_single_env->mem_write_vaddr);
 }
 
 static void notdirty_mem_writew(void *opaque, target_phys_addr_t addr, uint32_t val)
@@ -1822,7 +2033,7 @@
     /* we remove the notdirty callback only if the code has been
      flushed */
     if (dirty_flags == 0xff)
-        tlb_set_dirty(cpu_single_env, addr, cpu_single_env->mem_write_vaddr);
+        tlb_set_dirty(addr, cpu_single_env->mem_write_vaddr);
 }
 
 static void notdirty_mem_writel(void *opaque, target_phys_addr_t addr, uint32_t val)
@@ -1843,7 +2054,7 @@
     /* we remove the notdirty callback only if the code has been
      flushed */
     if (dirty_flags == 0xff)
-        tlb_set_dirty(cpu_single_env, addr, cpu_single_env->mem_write_vaddr);
+        tlb_set_dirty(addr, cpu_single_env->mem_write_vaddr);
 }
 
 static CPUReadMemoryFunc *error_mem_read[3] = {
@@ -1884,14 +2095,14 @@
     int i;
 
     if (io_index <= 0) {
-        if (io_mem_nb >= IO_MEM_NB_ENTRIES)
+        if (io_index >= IO_MEM_NB_ENTRIES)
             return -1;
         io_index = io_mem_nb++;
     } else {
         if (io_index >= IO_MEM_NB_ENTRIES)
             return -1;
     }
-    
+
     for(i = 0;i < 3; i++) {
         io_mem_read[io_index][i] = mem_read[i];
         io_mem_write[io_index][i] = mem_write[i];
@@ -1941,6 +2152,20 @@
     }
 }
 
+/* never used */
+uint32_t ldl_phys(target_phys_addr_t addr)
+{
+    return 0;
+}
+
+void stl_phys_notdirty(target_phys_addr_t addr, uint32_t val)
+{
+}
+
+void stl_phys(target_phys_addr_t addr, uint32_t val)
+{
+}
+
 #else
 void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
                             int len, int is_write)
@@ -1967,8 +2192,6 @@
         if (is_write) {
             if ((pd & ~TARGET_PAGE_MASK) != IO_MEM_RAM) {
                 io_index = (pd >> IO_MEM_SHIFT) & (IO_MEM_NB_ENTRIES - 1);
-                /* XXX: could force cpu_single_env to NULL to avoid
-                   potential bugs */
                 if (l >= 4 && ((addr & 3) == 0)) {
                     /* 32 bit write access */
                     val = ldl_p(buf);
@@ -2061,57 +2284,6 @@
     return val;
 }
 
-/* warning: addr must be aligned */
-uint64_t ldq_phys(target_phys_addr_t addr)
-{
-    int io_index;
-    uint8_t *ptr;
-    uint64_t val;
-    unsigned long pd;
-    PhysPageDesc *p;
-
-    p = phys_page_find(addr >> TARGET_PAGE_BITS);
-    if (!p) {
-        pd = IO_MEM_UNASSIGNED;
-    } else {
-        pd = p->phys_offset;
-    }
-
-    if ((pd & ~TARGET_PAGE_MASK) > IO_MEM_ROM) {
-        /* I/O case */
-        io_index = (pd >> IO_MEM_SHIFT) & (IO_MEM_NB_ENTRIES - 1);
-#ifdef TARGET_WORDS_BIGENDIAN
-        val = (uint64_t)io_mem_read[io_index][2](io_mem_opaque[io_index], addr) << 32;
-        val |= io_mem_read[io_index][2](io_mem_opaque[io_index], addr + 4);
-#else
-        val = io_mem_read[io_index][2](io_mem_opaque[io_index], addr);
-        val |= (uint64_t)io_mem_read[io_index][2](io_mem_opaque[io_index], addr + 4) << 32;
-#endif
-    } else {
-        /* RAM case */
-        ptr = phys_ram_base + (pd & TARGET_PAGE_MASK) +
-            (addr & ~TARGET_PAGE_MASK);
-        val = ldq_p(ptr);
-    }
-    return val;
-}
-
-/* XXX: optimize */
-uint32_t ldub_phys(target_phys_addr_t addr)
-{
-    uint8_t val;
-    cpu_physical_memory_read(addr, &val, 1);
-    return val;
-}
-
-/* XXX: optimize */
-uint32_t lduw_phys(target_phys_addr_t addr)
-{
-    uint16_t val;
-    cpu_physical_memory_read(addr, (uint8_t *)&val, 2);
-    return tswap16(val);
-}
-
 /* warning: addr must be aligned. The ram page is not masked as dirty
    and the code inside is not invalidated. It is useful if the dirty
    bits are used to track modified PTEs */
@@ -2173,27 +2345,6 @@
     }
 }
 
-/* XXX: optimize */
-void stb_phys(target_phys_addr_t addr, uint32_t val)
-{
-    uint8_t v = val;
-    cpu_physical_memory_write(addr, &v, 1);
-}
-
-/* XXX: optimize */
-void stw_phys(target_phys_addr_t addr, uint32_t val)
-{
-    uint16_t v = tswap16(val);
-    cpu_physical_memory_write(addr, (const uint8_t *)&v, 2);
-}
-
-/* XXX: optimize */
-void stq_phys(target_phys_addr_t addr, uint64_t val)
-{
-    val = tswap64(val);
-    cpu_physical_memory_write(addr, (const uint8_t *)&val, 8);
-}
-
 #endif
 
 /* virtual memory access for debug */