* [Qemu-devel] [PATCH qom-cpu for-next 0/2] QOM CPUState, part 12: CPU loops, revisited
@ 2013-07-30 16:55 Andreas Färber
2013-07-30 16:55 ` [Qemu-devel] " Andreas Färber
` (2 more replies)
0 siblings, 3 replies; 13+ messages in thread
From: Andreas Färber @ 2013-07-30 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Eduardo Habkost, Michael S. Tsirkin, Riku Voipio,
Markus Armbruster, Blue Swirl, Anthony Liguori, Igor Mammedov,
Paolo Bonzini, Andreas Färber, Aurélien Jarno
Hello everyone,
A short CPUState series for a change: this mini-series replaces our home-grown
CPU list (the first_cpu global and CPUState::next_cpu) with a QTAILQ.
To avoid repeating QTAILQ_FOREACH(foo, &cpus, node) everywhere, I am proposing
a wrapper CPU_FOREACH(foo) that hides the implementation details, as requested
by mst, while keeping inline loops, as requested by Markus.
The bulk of patch 1 is mechanical for/while loop conversions, except for:
exec.c: QTAILQ_INSERT_TAIL()
linux-user/syscall.c: QTAILQ_REMOVE() -- untested!
Having such a stable API will help with vCPU hot-unplug.
Patch 2 is controversial: it drops qemu_for_each_cpu() at Markus' request.
I'm including it as an RFC to showcase what exactly those suggestions involve.
Converting from for (cpu = ...) to CPU_FOREACH(cpu) is obviously much more
straightforward than converting between a function and a loop. Therefore, with
patch 1 in mind, it seems best to keep using for loops, which since recently no
longer require CPUArchState (the original trigger for qemu_for_each_cpu()).
Available for testing at:
git://github.com/afaerber/qemu-cpu.git qom-cpu-12.v1
https://github.com/afaerber/qemu-cpu/commits/qom-cpu-12.v1
Regards,
Andreas
From CPUState part 10 v1:
* Instead of replacing every loop with qemu_for_each_cpu()
replace loops with CPU_FOREACH() macro, based on QTAILQ_FOREACH().
Cc: Markus Armbruster <armbru@redhat.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Anthony Liguori <anthony@codemonkey.ws>
Cc: Blue Swirl <blauwirbel@gmail.com>
Cc: Aurélien Jarno <aurelien@aurel32.net>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Riku Voipio <riku.voipio@iki.fi>
Andreas Färber (2):
cpu: Use QTAILQ for CPU list
cpu: Replace qemu_for_each_cpu()
arch_init.c | 11 ++++-----
cpus.c | 58 ++++++++++++++++++++---------------------------
cputlb.c | 2 +-
dump.c | 10 ++++----
exec.c | 33 ++++++++-------------------
gdbstub.c | 14 ++++++------
hw/acpi/piix4.c | 20 ++++++++--------
hw/arm/boot.c | 2 +-
hw/i386/kvm/clock.c | 2 +-
hw/i386/kvmvapic.c | 2 +-
hw/i386/pc.c | 3 +--
hw/ppc/e500.c | 2 +-
hw/ppc/ppc.c | 2 +-
hw/ppc/spapr.c | 4 ++--
include/qom/cpu.h | 20 ++++++++--------
kvm-all.c | 8 +++----
linux-user/elfload.c | 2 +-
linux-user/main.c | 10 +++++---
linux-user/syscall.c | 17 ++------------
memory_mapping.c | 5 ++--
monitor.c | 2 +-
qom/cpu.c | 30 ++++++++----------------
target-i386/helper.c | 3 +--
target-i386/misc_helper.c | 2 +-
target-mips/op_helper.c | 10 ++++----
target-ppc/excp_helper.c | 2 +-
translate-all.c | 4 ++--
27 files changed, 113 insertions(+), 167 deletions(-)
--
1.8.1.4
^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH qom-cpu for-next 1/2] cpu: Use QTAILQ for CPU list
2013-07-30 16:55 [Qemu-devel] [PATCH qom-cpu for-next 0/2] QOM CPUState, part 12: CPU loops, revisited Andreas Färber
@ 2013-07-30 16:55 ` Andreas Färber
2013-07-30 16:55 ` [Qemu-devel] [RFC qom-cpu for-next 2/2] cpu: Replace qemu_for_each_cpu() Andreas Färber
2013-08-14 11:49 ` [Qemu-devel] [PATCH qom-cpu for-next 0/2] QOM CPUState, part 12: CPU loops, revisited Andreas Färber
2 siblings, 0 replies; 13+ messages in thread
From: Andreas Färber @ 2013-07-30 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Andreas Färber, Paul Brook, Peter Maydell, Anthony Liguori,
Alexander Graf, Scott Wood, Gleb Natapov, Paolo Bonzini,
Riku Voipio, Luiz Capitulino, Aurelien Jarno, open list:e500,
open list:Overall
Introduce CPU_FOREACH(), CPU_FOREACH_SAFE() and CPU_NEXT() shorthand
macros.
Signed-off-by: Andreas Färber <afaerber@suse.de>
---
cpus.c | 47 ++++++++++++++++++++---------------------------
cputlb.c | 2 +-
dump.c | 10 +++++-----
exec.c | 26 ++++++++++----------------
gdbstub.c | 14 +++++++-------
hw/arm/boot.c | 2 +-
hw/i386/kvm/clock.c | 2 +-
hw/i386/kvmvapic.c | 2 +-
hw/i386/pc.c | 3 +--
hw/ppc/e500.c | 2 +-
hw/ppc/ppc.c | 2 +-
hw/ppc/spapr.c | 4 ++--
include/qom/cpu.h | 11 +++++++++--
kvm-all.c | 8 ++++----
linux-user/elfload.c | 2 +-
linux-user/main.c | 10 +++++++---
linux-user/syscall.c | 17 ++---------------
memory_mapping.c | 5 +++--
monitor.c | 2 +-
target-i386/helper.c | 3 +--
target-i386/misc_helper.c | 2 +-
target-mips/op_helper.c | 10 ++++------
target-ppc/excp_helper.c | 2 +-
translate-all.c | 4 ++--
24 files changed, 87 insertions(+), 105 deletions(-)
diff --git a/cpus.c b/cpus.c
index 0f65e76..1e2fd8a 100644
--- a/cpus.c
+++ b/cpus.c
@@ -81,7 +81,7 @@ static bool all_cpu_threads_idle(void)
{
CPUState *cpu;
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
if (!cpu_thread_is_idle(cpu)) {
return false;
}
@@ -394,7 +394,7 @@ void hw_error(const char *fmt, ...)
fprintf(stderr, "qemu: hardware error: ");
vfprintf(stderr, fmt, ap);
fprintf(stderr, "\n");
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
fprintf(stderr, "CPU #%d:\n", cpu->cpu_index);
cpu_dump_state(cpu, stderr, fprintf, CPU_DUMP_FPU);
}
@@ -406,7 +406,7 @@ void cpu_synchronize_all_states(void)
{
CPUState *cpu;
- for (cpu = first_cpu; cpu; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
cpu_synchronize_state(cpu);
}
}
@@ -415,7 +415,7 @@ void cpu_synchronize_all_post_reset(void)
{
CPUState *cpu;
- for (cpu = first_cpu; cpu; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
cpu_synchronize_post_reset(cpu);
}
}
@@ -424,7 +424,7 @@ void cpu_synchronize_all_post_init(void)
{
CPUState *cpu;
- for (cpu = first_cpu; cpu; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
cpu_synchronize_post_init(cpu);
}
}
@@ -743,7 +743,7 @@ static void qemu_tcg_wait_io_event(void)
qemu_cond_wait(&qemu_io_proceeded_cond, &qemu_global_mutex);
}
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
qemu_wait_io_event_common(cpu);
}
}
@@ -855,11 +855,11 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
qemu_cond_signal(&qemu_cpu_cond);
/* wait for initial kick-off after machine start */
- while (first_cpu->stopped) {
+ while (QTAILQ_FIRST(&cpus)->stopped) {
qemu_cond_wait(tcg_halt_cond, &qemu_global_mutex);
/* process any pending work */
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
qemu_wait_io_event_common(cpu);
}
}
@@ -969,13 +969,12 @@ void qemu_mutex_unlock_iothread(void)
static int all_vcpus_paused(void)
{
- CPUState *cpu = first_cpu;
+ CPUState *cpu;
- while (cpu) {
+ CPU_FOREACH(cpu) {
if (!cpu->stopped) {
return 0;
}
- cpu = cpu->next_cpu;
}
return 1;
@@ -983,23 +982,20 @@ static int all_vcpus_paused(void)
void pause_all_vcpus(void)
{
- CPUState *cpu = first_cpu;
+ CPUState *cpu;
qemu_clock_enable(vm_clock, false);
- while (cpu) {
+ CPU_FOREACH(cpu) {
cpu->stop = true;
qemu_cpu_kick(cpu);
- cpu = cpu->next_cpu;
}
if (qemu_in_vcpu_thread()) {
cpu_stop_current();
if (!kvm_enabled()) {
- cpu = first_cpu;
- while (cpu) {
+ CPU_FOREACH(cpu) {
cpu->stop = false;
cpu->stopped = true;
- cpu = cpu->next_cpu;
}
return;
}
@@ -1007,10 +1003,8 @@ void pause_all_vcpus(void)
while (!all_vcpus_paused()) {
qemu_cond_wait(&qemu_pause_cond, &qemu_global_mutex);
- cpu = first_cpu;
- while (cpu) {
+ CPU_FOREACH(cpu) {
qemu_cpu_kick(cpu);
- cpu = cpu->next_cpu;
}
}
}
@@ -1024,12 +1018,11 @@ void cpu_resume(CPUState *cpu)
void resume_all_vcpus(void)
{
- CPUState *cpu = first_cpu;
+ CPUState *cpu;
qemu_clock_enable(vm_clock, true);
- while (cpu) {
+ CPU_FOREACH(cpu) {
cpu_resume(cpu);
- cpu = cpu->next_cpu;
}
}
@@ -1181,7 +1174,7 @@ static void tcg_exec_all(void)
if (next_cpu == NULL) {
next_cpu = first_cpu;
}
- for (; next_cpu != NULL && !exit_request; next_cpu = next_cpu->next_cpu) {
+ for (; next_cpu != NULL && !exit_request; next_cpu = CPU_NEXT(next_cpu)) {
CPUState *cpu = next_cpu;
CPUArchState *env = cpu->env_ptr;
@@ -1206,7 +1199,7 @@ void set_numa_modes(void)
CPUState *cpu;
int i;
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
for (i = 0; i < nb_numa_nodes; i++) {
if (test_bit(cpu->cpu_index, node_cpumask[i])) {
cpu->numa_node = i;
@@ -1228,7 +1221,7 @@ CpuInfoList *qmp_query_cpus(Error **errp)
CpuInfoList *head = NULL, *cur_item = NULL;
CPUState *cpu;
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
CpuInfoList *info;
#if defined(TARGET_I386)
X86CPU *x86_cpu = X86_CPU(cpu);
@@ -1357,7 +1350,7 @@ void qmp_inject_nmi(Error **errp)
#if defined(TARGET_I386)
CPUState *cs;
- for (cs = first_cpu; cs != NULL; cs = cs->next_cpu) {
+ CPU_FOREACH(cs) {
X86CPU *cpu = X86_CPU(cs);
CPUX86State *env = &cpu->env;
diff --git a/cputlb.c b/cputlb.c
index 977c0ca..19ecf60 100644
--- a/cputlb.c
+++ b/cputlb.c
@@ -189,7 +189,7 @@ void cpu_tlb_reset_dirty_all(ram_addr_t start1, ram_addr_t length)
CPUState *cpu;
CPUArchState *env;
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
int mmu_idx;
env = cpu->env_ptr;
diff --git a/dump.c b/dump.c
index 6a3a72a..dac97a3 100644
--- a/dump.c
+++ b/dump.c
@@ -279,7 +279,7 @@ static int write_elf64_notes(DumpState *s)
int ret;
int id;
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
id = cpu_index(cpu);
ret = cpu_write_elf64_note(fd_write_vmcore, cpu, id, s);
if (ret < 0) {
@@ -288,7 +288,7 @@ static int write_elf64_notes(DumpState *s)
}
}
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
ret = cpu_write_elf64_qemunote(fd_write_vmcore, cpu, s);
if (ret < 0) {
dump_error(s, "dump: failed to write CPU status.\n");
@@ -329,7 +329,7 @@ static int write_elf32_notes(DumpState *s)
int ret;
int id;
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
id = cpu_index(cpu);
ret = cpu_write_elf32_note(fd_write_vmcore, cpu, id, s);
if (ret < 0) {
@@ -338,7 +338,7 @@ static int write_elf32_notes(DumpState *s)
}
}
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
ret = cpu_write_elf32_qemunote(fd_write_vmcore, cpu, s);
if (ret < 0) {
dump_error(s, "dump: failed to write CPU status.\n");
@@ -734,7 +734,7 @@ static int dump_init(DumpState *s, int fd, bool paging, bool has_filter,
*/
cpu_synchronize_all_states();
nr_cpus = 0;
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
nr_cpus++;
}
diff --git a/exec.c b/exec.c
index c4f2894..4ee61a6 100644
--- a/exec.c
+++ b/exec.c
@@ -69,7 +69,7 @@ static MemoryRegion io_mem_unassigned;
#endif
-CPUState *first_cpu;
+struct CPUTailQ cpus = QTAILQ_HEAD_INITIALIZER(cpus);
/* current CPU in the current thread. It is only valid inside
cpu_exec() */
DEFINE_TLS(CPUState *, current_cpu);
@@ -351,26 +351,23 @@ const VMStateDescription vmstate_cpu_common = {
CPUState *qemu_get_cpu(int index)
{
- CPUState *cpu = first_cpu;
+ CPUState *cpu;
- while (cpu) {
+ CPU_FOREACH(cpu) {
if (cpu->cpu_index == index) {
- break;
+ return cpu;
}
- cpu = cpu->next_cpu;
}
- return cpu;
+ return NULL;
}
void qemu_for_each_cpu(void (*func)(CPUState *cpu, void *data), void *data)
{
CPUState *cpu;
- cpu = first_cpu;
- while (cpu) {
+ CPU_FOREACH(cpu) {
func(cpu, data);
- cpu = cpu->next_cpu;
}
}
@@ -378,17 +375,14 @@ void cpu_exec_init(CPUArchState *env)
{
CPUState *cpu = ENV_GET_CPU(env);
CPUClass *cc = CPU_GET_CLASS(cpu);
- CPUState **pcpu;
+ CPUState *some_cpu;
int cpu_index;
#if defined(CONFIG_USER_ONLY)
cpu_list_lock();
#endif
- cpu->next_cpu = NULL;
- pcpu = &first_cpu;
cpu_index = 0;
- while (*pcpu != NULL) {
- pcpu = &(*pcpu)->next_cpu;
+ CPU_FOREACH(some_cpu) {
cpu_index++;
}
cpu->cpu_index = cpu_index;
@@ -398,7 +392,7 @@ void cpu_exec_init(CPUArchState *env)
#ifndef CONFIG_USER_ONLY
cpu->thread_id = qemu_get_thread_id();
#endif
- *pcpu = cpu;
+ QTAILQ_INSERT_TAIL(&cpus, cpu, node);
#if defined(CONFIG_USER_ONLY)
cpu_list_unlock();
#endif
@@ -1759,7 +1753,7 @@ static void tcg_commit(MemoryListener *listener)
/* since each CPU stores ram addresses in its TLB cache, we must
reset the modified entries */
/* XXX: slow ! */
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
CPUArchState *env = cpu->env_ptr;
tlb_flush(env, 1);
diff --git a/gdbstub.c b/gdbstub.c
index 1af25a6..ae53df0 100644
--- a/gdbstub.c
+++ b/gdbstub.c
@@ -646,7 +646,7 @@ static int gdb_breakpoint_insert(target_ulong addr, target_ulong len, int type)
switch (type) {
case GDB_BREAKPOINT_SW:
case GDB_BREAKPOINT_HW:
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
env = cpu->env_ptr;
err = cpu_breakpoint_insert(env, addr, BP_GDB, NULL);
if (err)
@@ -657,7 +657,7 @@ static int gdb_breakpoint_insert(target_ulong addr, target_ulong len, int type)
case GDB_WATCHPOINT_WRITE:
case GDB_WATCHPOINT_READ:
case GDB_WATCHPOINT_ACCESS:
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
env = cpu->env_ptr;
err = cpu_watchpoint_insert(env, addr, len, xlat_gdb_type[type],
NULL);
@@ -684,7 +684,7 @@ static int gdb_breakpoint_remove(target_ulong addr, target_ulong len, int type)
switch (type) {
case GDB_BREAKPOINT_SW:
case GDB_BREAKPOINT_HW:
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
env = cpu->env_ptr;
err = cpu_breakpoint_remove(env, addr, BP_GDB);
if (err)
@@ -695,7 +695,7 @@ static int gdb_breakpoint_remove(target_ulong addr, target_ulong len, int type)
case GDB_WATCHPOINT_WRITE:
case GDB_WATCHPOINT_READ:
case GDB_WATCHPOINT_ACCESS:
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
env = cpu->env_ptr;
err = cpu_watchpoint_remove(env, addr, len, xlat_gdb_type[type]);
if (err)
@@ -718,7 +718,7 @@ static void gdb_breakpoint_remove_all(void)
return;
}
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
env = cpu->env_ptr;
cpu_breakpoint_remove_all(env, BP_GDB);
#ifndef CONFIG_USER_ONLY
@@ -742,7 +742,7 @@ static CPUState *find_cpu(uint32_t thread_id)
{
CPUState *cpu;
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
if (cpu_index(cpu) == thread_id) {
return cpu;
}
@@ -1068,7 +1068,7 @@ static int gdb_handle_packet(GDBState *s, const char *line_buf)
if (s->query_cpu) {
snprintf(buf, sizeof(buf), "m%x", cpu_index(s->query_cpu));
put_packet(s, buf);
- s->query_cpu = s->query_cpu->next_cpu;
+ s->query_cpu = CPU_NEXT(s->query_cpu);
} else
put_packet(s, "l");
break;
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index 2cbeefd..1e313af 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -468,7 +468,7 @@ void arm_load_kernel(ARMCPU *cpu, struct arm_boot_info *info)
}
info->is_linux = is_linux;
- for (; cs; cs = cs->next_cpu) {
+ for (; cs; cs = CPU_NEXT(cs)) {
cpu = ARM_CPU(cs);
cpu->env.boot_info = info;
qemu_register_reset(do_cpu_reset, cpu);
diff --git a/hw/i386/kvm/clock.c b/hw/i386/kvm/clock.c
index e89e2f7..92aabb8 100644
--- a/hw/i386/kvm/clock.c
+++ b/hw/i386/kvm/clock.c
@@ -59,7 +59,7 @@ static void kvmclock_vm_state_change(void *opaque, int running,
if (!cap_clock_ctrl) {
return;
}
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
ret = kvm_vcpu_ioctl(cpu, KVM_KVMCLOCK_CTRL, 0);
if (ret) {
if (ret != -EINVAL) {
diff --git a/hw/i386/kvmvapic.c b/hw/i386/kvmvapic.c
index a4506bc..cb860d5 100644
--- a/hw/i386/kvmvapic.c
+++ b/hw/i386/kvmvapic.c
@@ -498,7 +498,7 @@ static void vapic_enable_tpr_reporting(bool enable)
X86CPU *cpu;
CPUX86State *env;
- for (cs = first_cpu; cs != NULL; cs = cs->next_cpu) {
+ CPU_FOREACH(cs) {
cpu = X86_CPU(cs);
env = &cpu->env;
info.apic = env->apic_state;
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 2a87563..22f62c5 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -192,13 +192,12 @@ static void pic_irq_request(void *opaque, int irq, int level)
DPRINTF("pic_irqs: %s irq %d\n", level? "raise" : "lower", irq);
if (env->apic_state) {
- while (cs) {
+ CPU_FOREACH(cs) {
cpu = X86_CPU(cs);
env = &cpu->env;
if (apic_accept_pic_intr(env->apic_state)) {
apic_deliver_pic_intr(env->apic_state, level);
}
- cs = cs->next_cpu;
}
} else {
if (level) {
diff --git a/hw/ppc/e500.c b/hw/ppc/e500.c
index f00a62a..c2876fe 100644
--- a/hw/ppc/e500.c
+++ b/hw/ppc/e500.c
@@ -505,7 +505,7 @@ static DeviceState *ppce500_init_mpic_kvm(PPCE500Params *params,
return NULL;
}
- for (cs = first_cpu; cs != NULL; cs = cs->next_cpu) {
+ CPU_FOREACH(cs) {
if (kvm_openpic_connect_vcpu(dev, cs)) {
fprintf(stderr, "%s: failed to connect vcpu to irqchip\n",
__func__);
diff --git a/hw/ppc/ppc.c b/hw/ppc/ppc.c
index e1c095c..17f47e4 100644
--- a/hw/ppc/ppc.c
+++ b/hw/ppc/ppc.c
@@ -443,7 +443,7 @@ void ppce500_set_mpic_proxy(bool enabled)
{
CPUState *cs;
- for (cs = first_cpu; cs != NULL; cs = cs->next_cpu) {
+ CPU_FOREACH(cs) {
PowerPCCPU *cpu = POWERPC_CPU(cs);
cpu->env.mpic_proxy = enabled;
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 16bfab9..84a26c5 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -167,7 +167,7 @@ static int spapr_fixup_cpu_dt(void *fdt, sPAPREnvironment *spapr)
assert(spapr->cpu_model);
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
uint32_t associativity[] = {cpu_to_be32(0x5),
cpu_to_be32(0x0),
cpu_to_be32(0x0),
@@ -331,7 +331,7 @@ static void *spapr_create_fdt_skel(const char *cpu_model,
/* This is needed during FDT finalization */
spapr->cpu_model = g_strdup(modelname);
- for (cs = first_cpu; cs != NULL; cs = cs->next_cpu) {
+ CPU_FOREACH(cs) {
PowerPCCPU *cpu = POWERPC_CPU(cs);
CPUPPCState *env = &cpu->env;
PowerPCCPUClass *pcc = POWERPC_CPU_GET_CLASS(cs);
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 0d6e95c..6d052ee 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -23,6 +23,7 @@
#include <signal.h>
#include "hw/qdev-core.h"
#include "exec/hwaddr.h"
+#include "qemu/queue.h"
#include "qemu/thread.h"
#include "qemu/tls.h"
#include "qemu/typedefs.h"
@@ -188,7 +189,7 @@ struct CPUState {
struct TranslationBlock *current_tb;
struct GDBRegisterState *gdb_regs;
int gdb_num_regs;
- CPUState *next_cpu;
+ QTAILQ_ENTRY(CPUState) node;
int kvm_fd;
bool kvm_vcpu_dirty;
@@ -200,7 +201,13 @@ struct CPUState {
uint32_t halted; /* used by alpha, cris, ppc TCG */
};
-extern CPUState *first_cpu;
+QTAILQ_HEAD(CPUTailQ, CPUState);
+extern struct CPUTailQ cpus;
+#define CPU_NEXT(cpu) QTAILQ_NEXT(cpu, node)
+#define CPU_FOREACH(cpu) QTAILQ_FOREACH(cpu, &cpus, node)
+#define CPU_FOREACH_SAFE(cpu, next_cpu) \
+ QTAILQ_FOREACH_SAFE(cpu, &cpus, node, next_cpu)
+#define first_cpu QTAILQ_FIRST(&cpus)
DECLARE_TLS(CPUState *, current_cpu);
#define current_cpu tls_var(current_cpu)
diff --git a/kvm-all.c b/kvm-all.c
index 716860f..52df2a4 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -1933,7 +1933,7 @@ int kvm_insert_breakpoint(CPUState *cpu, target_ulong addr,
}
}
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
err = kvm_update_guest_debug(cpu, 0);
if (err) {
return err;
@@ -1973,7 +1973,7 @@ int kvm_remove_breakpoint(CPUState *cpu, target_ulong addr,
}
}
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
err = kvm_update_guest_debug(cpu, 0);
if (err) {
return err;
@@ -1990,7 +1990,7 @@ void kvm_remove_all_breakpoints(CPUState *cpu)
QTAILQ_FOREACH_SAFE(bp, &s->kvm_sw_breakpoints, entry, next) {
if (kvm_arch_remove_sw_breakpoint(cpu, bp) != 0) {
/* Try harder to find a CPU that currently sees the breakpoint. */
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
if (kvm_arch_remove_sw_breakpoint(cpu, bp) == 0) {
break;
}
@@ -2001,7 +2001,7 @@ void kvm_remove_all_breakpoints(CPUState *cpu)
}
kvm_arch_remove_all_hw_breakpoints();
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
kvm_update_guest_debug(cpu, 0);
}
}
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index 7ce2eab..72d9270 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -2668,7 +2668,7 @@ static int fill_note_info(struct elf_note_info *info,
/* read and fill status of all threads */
cpu_list_lock();
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
if (cpu == thread_cpu) {
continue;
}
diff --git a/linux-user/main.c b/linux-user/main.c
index 03859bc..5c2f7b2 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -117,10 +117,14 @@ void fork_end(int child)
{
mmap_fork_end(child);
if (child) {
+ CPUState *cpu, *next_cpu;
/* Child processes created by fork() only have a single thread.
Discard information about the parent threads. */
- first_cpu = thread_cpu;
- first_cpu->next_cpu = NULL;
+ CPU_FOREACH_SAFE(cpu, next_cpu) {
+ if (cpu != thread_cpu) {
+ QTAILQ_REMOVE(&cpus, cpu, node);
+ }
+ }
pending_cpus = 0;
pthread_mutex_init(&exclusive_lock, NULL);
pthread_mutex_init(&cpu_list_mutex, NULL);
@@ -154,7 +158,7 @@ static inline void start_exclusive(void)
pending_cpus = 1;
/* Make all other cpus stop executing. */
- for (other_cpu = first_cpu; other_cpu; other_cpu = other_cpu->next_cpu) {
+ CPU_FOREACH(other_cpu) {
if (other_cpu->running) {
pending_cpus++;
cpu_exit(other_cpu);
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index 3f6db4b..0044196 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -5113,25 +5113,12 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
Do thread termination if we have more then one thread. */
/* FIXME: This probably breaks if a signal arrives. We should probably
be disabling signals. */
- if (first_cpu->next_cpu) {
+ if (CPU_NEXT(first_cpu)) {
TaskState *ts;
- CPUState **lastp;
- CPUState *p;
cpu_list_lock();
- lastp = &first_cpu;
- p = first_cpu;
- while (p && p != cpu) {
- lastp = &p->next_cpu;
- p = p->next_cpu;
- }
- /* If we didn't find the CPU for this thread then something is
- horribly wrong. */
- if (!p) {
- abort();
- }
/* Remove the CPU from the list. */
- *lastp = p->next_cpu;
+ QTAILQ_REMOVE(&cpus, cpu, node);
cpu_list_unlock();
ts = ((CPUArchState *)cpu_env)->opaque;
if (ts->child_tidptr) {
diff --git a/memory_mapping.c b/memory_mapping.c
index 515a984..a50ca53 100644
--- a/memory_mapping.c
+++ b/memory_mapping.c
@@ -169,7 +169,7 @@ static CPUState *find_paging_enabled_cpu(CPUState *start_cpu)
{
CPUState *cpu;
- for (cpu = start_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
if (cpu_paging_enabled(cpu)) {
return cpu;
}
@@ -186,7 +186,8 @@ void qemu_get_guest_memory_mapping(MemoryMappingList *list, Error **errp)
first_paging_enabled_cpu = find_paging_enabled_cpu(first_cpu);
if (first_paging_enabled_cpu) {
- for (cpu = first_paging_enabled_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ for (cpu = first_paging_enabled_cpu; cpu != NULL;
+ cpu = CPU_NEXT(cpu)) {
Error *err = NULL;
cpu_get_memory_mapping(cpu, list, &err);
if (err) {
diff --git a/monitor.c b/monitor.c
index 5dc0aa9..2ac1ce3 100644
--- a/monitor.c
+++ b/monitor.c
@@ -1819,7 +1819,7 @@ static void do_info_numa(Monitor *mon, const QDict *qdict)
monitor_printf(mon, "%d nodes\n", nb_numa_nodes);
for (i = 0; i < nb_numa_nodes; i++) {
monitor_printf(mon, "node %d cpus:", i);
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
if (cpu->numa_node == i) {
monitor_printf(mon, " %d", cpu->cpu_index);
}
diff --git a/target-i386/helper.c b/target-i386/helper.c
index bf3e2ac..7c58e27 100644
--- a/target-i386/helper.c
+++ b/target-i386/helper.c
@@ -1231,8 +1231,7 @@ void cpu_x86_inject_mce(Monitor *mon, X86CPU *cpu, int bank,
params.mcg_status = MCG_STATUS_MCIP | MCG_STATUS_RIPV;
params.addr = 0;
params.misc = 0;
- for (other_cs = first_cpu; other_cs != NULL;
- other_cs = other_cs->next_cpu) {
+ CPU_FOREACH(other_cs) {
if (other_cs == cs) {
continue;
}
diff --git a/target-i386/misc_helper.c b/target-i386/misc_helper.c
index 957926c..93933fd 100644
--- a/target-i386/misc_helper.c
+++ b/target-i386/misc_helper.c
@@ -610,7 +610,7 @@ void helper_mwait(CPUX86State *env, int next_eip_addend)
cpu = x86_env_get_cpu(env);
cs = CPU(cpu);
/* XXX: not complete but not completely erroneous */
- if (cs->cpu_index != 0 || cs->next_cpu != NULL) {
+ if (cs->cpu_index != 0 || CPU_NEXT(cs) != NULL) {
/* more than one CPU: do not sleep because another CPU may
wake this one */
} else {
diff --git a/target-mips/op_helper.c b/target-mips/op_helper.c
index b828375..8e3a6d7 100644
--- a/target-mips/op_helper.c
+++ b/target-mips/op_helper.c
@@ -1699,15 +1699,14 @@ target_ulong helper_dvpe(CPUMIPSState *env)
CPUState *other_cs = first_cpu;
target_ulong prev = env->mvp->CP0_MVPControl;
- do {
+ CPU_FOREACH(other_cs) {
MIPSCPU *other_cpu = MIPS_CPU(other_cs);
/* Turn off all VPEs except the one executing the dvpe. */
if (&other_cpu->env != env) {
other_cpu->env.mvp->CP0_MVPControl &= ~(1 << CP0MVPCo_EVP);
mips_vpe_sleep(other_cpu);
}
- other_cs = other_cs->next_cpu;
- } while (other_cs);
+ }
return prev;
}
@@ -1716,7 +1715,7 @@ target_ulong helper_evpe(CPUMIPSState *env)
CPUState *other_cs = first_cpu;
target_ulong prev = env->mvp->CP0_MVPControl;
- do {
+ CPU_FOREACH(other_cs) {
MIPSCPU *other_cpu = MIPS_CPU(other_cs);
if (&other_cpu->env != env
@@ -1726,8 +1725,7 @@ target_ulong helper_evpe(CPUMIPSState *env)
other_cpu->env.mvp->CP0_MVPControl |= (1 << CP0MVPCo_EVP);
mips_vpe_wake(other_cpu); /* And wake it up. */
}
- other_cs = other_cs->next_cpu;
- } while (other_cs);
+ }
return prev;
}
#endif /* !CONFIG_USER_ONLY */
diff --git a/target-ppc/excp_helper.c b/target-ppc/excp_helper.c
index e9fcad8..df7886a 100644
--- a/target-ppc/excp_helper.c
+++ b/target-ppc/excp_helper.c
@@ -992,7 +992,7 @@ void helper_msgsnd(target_ulong rb)
return;
}
- for (cs = first_cpu; cs != NULL; cs = cs->next_cpu) {
+ CPU_FOREACH(cs) {
PowerPCCPU *cpu = POWERPC_CPU(cs);
CPUPPCState *cenv = &cpu->env;
diff --git a/translate-all.c b/translate-all.c
index 3b5fc7c..2c923c6 100644
--- a/translate-all.c
+++ b/translate-all.c
@@ -696,7 +696,7 @@ void tb_flush(CPUArchState *env1)
}
tcg_ctx.tb_ctx.nb_tbs = 0;
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
CPUArchState *env = cpu->env_ptr;
memset(env->tb_jmp_cache, 0, TB_JMP_CACHE_SIZE * sizeof(void *));
@@ -850,7 +850,7 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
/* remove the TB from the hash list */
h = tb_jmp_cache_hash_func(tb->pc);
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
CPUArchState *env = cpu->env_ptr;
if (env->tb_jmp_cache[h] == tb) {
--
1.8.1.4
^ permalink raw reply related [flat|nested] 13+ messages in thread
+struct CPUTailQ cpus = QTAILQ_HEAD_INITIALIZER(cpus);
/* current CPU in the current thread. It is only valid inside
cpu_exec() */
DEFINE_TLS(CPUState *, current_cpu);
@@ -351,26 +351,23 @@ const VMStateDescription vmstate_cpu_common = {
CPUState *qemu_get_cpu(int index)
{
- CPUState *cpu = first_cpu;
+ CPUState *cpu;
- while (cpu) {
+ CPU_FOREACH(cpu) {
if (cpu->cpu_index == index) {
- break;
+ return cpu;
}
- cpu = cpu->next_cpu;
}
- return cpu;
+ return NULL;
}
void qemu_for_each_cpu(void (*func)(CPUState *cpu, void *data), void *data)
{
CPUState *cpu;
- cpu = first_cpu;
- while (cpu) {
+ CPU_FOREACH(cpu) {
func(cpu, data);
- cpu = cpu->next_cpu;
}
}
@@ -378,17 +375,14 @@ void cpu_exec_init(CPUArchState *env)
{
CPUState *cpu = ENV_GET_CPU(env);
CPUClass *cc = CPU_GET_CLASS(cpu);
- CPUState **pcpu;
+ CPUState *some_cpu;
int cpu_index;
#if defined(CONFIG_USER_ONLY)
cpu_list_lock();
#endif
- cpu->next_cpu = NULL;
- pcpu = &first_cpu;
cpu_index = 0;
- while (*pcpu != NULL) {
- pcpu = &(*pcpu)->next_cpu;
+ CPU_FOREACH(some_cpu) {
cpu_index++;
}
cpu->cpu_index = cpu_index;
@@ -398,7 +392,7 @@ void cpu_exec_init(CPUArchState *env)
#ifndef CONFIG_USER_ONLY
cpu->thread_id = qemu_get_thread_id();
#endif
- *pcpu = cpu;
+ QTAILQ_INSERT_TAIL(&cpus, cpu, node);
#if defined(CONFIG_USER_ONLY)
cpu_list_unlock();
#endif
@@ -1759,7 +1753,7 @@ static void tcg_commit(MemoryListener *listener)
/* since each CPU stores ram addresses in its TLB cache, we must
reset the modified entries */
/* XXX: slow ! */
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
CPUArchState *env = cpu->env_ptr;
tlb_flush(env, 1);
diff --git a/gdbstub.c b/gdbstub.c
index 1af25a6..ae53df0 100644
--- a/gdbstub.c
+++ b/gdbstub.c
@@ -646,7 +646,7 @@ static int gdb_breakpoint_insert(target_ulong addr, target_ulong len, int type)
switch (type) {
case GDB_BREAKPOINT_SW:
case GDB_BREAKPOINT_HW:
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
env = cpu->env_ptr;
err = cpu_breakpoint_insert(env, addr, BP_GDB, NULL);
if (err)
@@ -657,7 +657,7 @@ static int gdb_breakpoint_insert(target_ulong addr, target_ulong len, int type)
case GDB_WATCHPOINT_WRITE:
case GDB_WATCHPOINT_READ:
case GDB_WATCHPOINT_ACCESS:
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
env = cpu->env_ptr;
err = cpu_watchpoint_insert(env, addr, len, xlat_gdb_type[type],
NULL);
@@ -684,7 +684,7 @@ static int gdb_breakpoint_remove(target_ulong addr, target_ulong len, int type)
switch (type) {
case GDB_BREAKPOINT_SW:
case GDB_BREAKPOINT_HW:
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
env = cpu->env_ptr;
err = cpu_breakpoint_remove(env, addr, BP_GDB);
if (err)
@@ -695,7 +695,7 @@ static int gdb_breakpoint_remove(target_ulong addr, target_ulong len, int type)
case GDB_WATCHPOINT_WRITE:
case GDB_WATCHPOINT_READ:
case GDB_WATCHPOINT_ACCESS:
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
env = cpu->env_ptr;
err = cpu_watchpoint_remove(env, addr, len, xlat_gdb_type[type]);
if (err)
@@ -718,7 +718,7 @@ static void gdb_breakpoint_remove_all(void)
return;
}
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
env = cpu->env_ptr;
cpu_breakpoint_remove_all(env, BP_GDB);
#ifndef CONFIG_USER_ONLY
@@ -742,7 +742,7 @@ static CPUState *find_cpu(uint32_t thread_id)
{
CPUState *cpu;
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
if (cpu_index(cpu) == thread_id) {
return cpu;
}
@@ -1068,7 +1068,7 @@ static int gdb_handle_packet(GDBState *s, const char *line_buf)
if (s->query_cpu) {
snprintf(buf, sizeof(buf), "m%x", cpu_index(s->query_cpu));
put_packet(s, buf);
- s->query_cpu = s->query_cpu->next_cpu;
+ s->query_cpu = CPU_NEXT(s->query_cpu);
} else
put_packet(s, "l");
break;
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index 2cbeefd..1e313af 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -468,7 +468,7 @@ void arm_load_kernel(ARMCPU *cpu, struct arm_boot_info *info)
}
info->is_linux = is_linux;
- for (; cs; cs = cs->next_cpu) {
+ for (; cs; cs = CPU_NEXT(cs)) {
cpu = ARM_CPU(cs);
cpu->env.boot_info = info;
qemu_register_reset(do_cpu_reset, cpu);
diff --git a/hw/i386/kvm/clock.c b/hw/i386/kvm/clock.c
index e89e2f7..92aabb8 100644
--- a/hw/i386/kvm/clock.c
+++ b/hw/i386/kvm/clock.c
@@ -59,7 +59,7 @@ static void kvmclock_vm_state_change(void *opaque, int running,
if (!cap_clock_ctrl) {
return;
}
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
ret = kvm_vcpu_ioctl(cpu, KVM_KVMCLOCK_CTRL, 0);
if (ret) {
if (ret != -EINVAL) {
diff --git a/hw/i386/kvmvapic.c b/hw/i386/kvmvapic.c
index a4506bc..cb860d5 100644
--- a/hw/i386/kvmvapic.c
+++ b/hw/i386/kvmvapic.c
@@ -498,7 +498,7 @@ static void vapic_enable_tpr_reporting(bool enable)
X86CPU *cpu;
CPUX86State *env;
- for (cs = first_cpu; cs != NULL; cs = cs->next_cpu) {
+ CPU_FOREACH(cs) {
cpu = X86_CPU(cs);
env = &cpu->env;
info.apic = env->apic_state;
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 2a87563..22f62c5 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -192,13 +192,12 @@ static void pic_irq_request(void *opaque, int irq, int level)
DPRINTF("pic_irqs: %s irq %d\n", level? "raise" : "lower", irq);
if (env->apic_state) {
- while (cs) {
+ CPU_FOREACH(cs) {
cpu = X86_CPU(cs);
env = &cpu->env;
if (apic_accept_pic_intr(env->apic_state)) {
apic_deliver_pic_intr(env->apic_state, level);
}
- cs = cs->next_cpu;
}
} else {
if (level) {
diff --git a/hw/ppc/e500.c b/hw/ppc/e500.c
index f00a62a..c2876fe 100644
--- a/hw/ppc/e500.c
+++ b/hw/ppc/e500.c
@@ -505,7 +505,7 @@ static DeviceState *ppce500_init_mpic_kvm(PPCE500Params *params,
return NULL;
}
- for (cs = first_cpu; cs != NULL; cs = cs->next_cpu) {
+ CPU_FOREACH(cs) {
if (kvm_openpic_connect_vcpu(dev, cs)) {
fprintf(stderr, "%s: failed to connect vcpu to irqchip\n",
__func__);
diff --git a/hw/ppc/ppc.c b/hw/ppc/ppc.c
index e1c095c..17f47e4 100644
--- a/hw/ppc/ppc.c
+++ b/hw/ppc/ppc.c
@@ -443,7 +443,7 @@ void ppce500_set_mpic_proxy(bool enabled)
{
CPUState *cs;
- for (cs = first_cpu; cs != NULL; cs = cs->next_cpu) {
+ CPU_FOREACH(cs) {
PowerPCCPU *cpu = POWERPC_CPU(cs);
cpu->env.mpic_proxy = enabled;
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 16bfab9..84a26c5 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -167,7 +167,7 @@ static int spapr_fixup_cpu_dt(void *fdt, sPAPREnvironment *spapr)
assert(spapr->cpu_model);
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
uint32_t associativity[] = {cpu_to_be32(0x5),
cpu_to_be32(0x0),
cpu_to_be32(0x0),
@@ -331,7 +331,7 @@ static void *spapr_create_fdt_skel(const char *cpu_model,
/* This is needed during FDT finalization */
spapr->cpu_model = g_strdup(modelname);
- for (cs = first_cpu; cs != NULL; cs = cs->next_cpu) {
+ CPU_FOREACH(cs) {
PowerPCCPU *cpu = POWERPC_CPU(cs);
CPUPPCState *env = &cpu->env;
PowerPCCPUClass *pcc = POWERPC_CPU_GET_CLASS(cs);
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 0d6e95c..6d052ee 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -23,6 +23,7 @@
#include <signal.h>
#include "hw/qdev-core.h"
#include "exec/hwaddr.h"
+#include "qemu/queue.h"
#include "qemu/thread.h"
#include "qemu/tls.h"
#include "qemu/typedefs.h"
@@ -188,7 +189,7 @@ struct CPUState {
struct TranslationBlock *current_tb;
struct GDBRegisterState *gdb_regs;
int gdb_num_regs;
- CPUState *next_cpu;
+ QTAILQ_ENTRY(CPUState) node;
int kvm_fd;
bool kvm_vcpu_dirty;
@@ -200,7 +201,13 @@ struct CPUState {
uint32_t halted; /* used by alpha, cris, ppc TCG */
};
-extern CPUState *first_cpu;
+QTAILQ_HEAD(CPUTailQ, CPUState);
+extern struct CPUTailQ cpus;
+#define CPU_NEXT(cpu) QTAILQ_NEXT(cpu, node)
+#define CPU_FOREACH(cpu) QTAILQ_FOREACH(cpu, &cpus, node)
+#define CPU_FOREACH_SAFE(cpu, next_cpu) \
+ QTAILQ_FOREACH_SAFE(cpu, &cpus, node, next_cpu)
+#define first_cpu QTAILQ_FIRST(&cpus)
DECLARE_TLS(CPUState *, current_cpu);
#define current_cpu tls_var(current_cpu)
diff --git a/kvm-all.c b/kvm-all.c
index 716860f..52df2a4 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -1933,7 +1933,7 @@ int kvm_insert_breakpoint(CPUState *cpu, target_ulong addr,
}
}
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
err = kvm_update_guest_debug(cpu, 0);
if (err) {
return err;
@@ -1973,7 +1973,7 @@ int kvm_remove_breakpoint(CPUState *cpu, target_ulong addr,
}
}
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
err = kvm_update_guest_debug(cpu, 0);
if (err) {
return err;
@@ -1990,7 +1990,7 @@ void kvm_remove_all_breakpoints(CPUState *cpu)
QTAILQ_FOREACH_SAFE(bp, &s->kvm_sw_breakpoints, entry, next) {
if (kvm_arch_remove_sw_breakpoint(cpu, bp) != 0) {
/* Try harder to find a CPU that currently sees the breakpoint. */
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
if (kvm_arch_remove_sw_breakpoint(cpu, bp) == 0) {
break;
}
@@ -2001,7 +2001,7 @@ void kvm_remove_all_breakpoints(CPUState *cpu)
}
kvm_arch_remove_all_hw_breakpoints();
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
kvm_update_guest_debug(cpu, 0);
}
}
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index 7ce2eab..72d9270 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -2668,7 +2668,7 @@ static int fill_note_info(struct elf_note_info *info,
/* read and fill status of all threads */
cpu_list_lock();
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
if (cpu == thread_cpu) {
continue;
}
diff --git a/linux-user/main.c b/linux-user/main.c
index 03859bc..5c2f7b2 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -117,10 +117,14 @@ void fork_end(int child)
{
mmap_fork_end(child);
if (child) {
+ CPUState *cpu, *next_cpu;
/* Child processes created by fork() only have a single thread.
Discard information about the parent threads. */
- first_cpu = thread_cpu;
- first_cpu->next_cpu = NULL;
+ CPU_FOREACH_SAFE(cpu, next_cpu) {
+ if (cpu != thread_cpu) {
+ QTAILQ_REMOVE(&cpus, cpu, node);
+ }
+ }
pending_cpus = 0;
pthread_mutex_init(&exclusive_lock, NULL);
pthread_mutex_init(&cpu_list_mutex, NULL);
@@ -154,7 +158,7 @@ static inline void start_exclusive(void)
pending_cpus = 1;
/* Make all other cpus stop executing. */
- for (other_cpu = first_cpu; other_cpu; other_cpu = other_cpu->next_cpu) {
+ CPU_FOREACH(other_cpu) {
if (other_cpu->running) {
pending_cpus++;
cpu_exit(other_cpu);
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index 3f6db4b..0044196 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -5113,25 +5113,12 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
Do thread termination if we have more than one thread. */
/* FIXME: This probably breaks if a signal arrives. We should probably
be disabling signals. */
- if (first_cpu->next_cpu) {
+ if (CPU_NEXT(first_cpu)) {
TaskState *ts;
- CPUState **lastp;
- CPUState *p;
cpu_list_lock();
- lastp = &first_cpu;
- p = first_cpu;
- while (p && p != cpu) {
- lastp = &p->next_cpu;
- p = p->next_cpu;
- }
- /* If we didn't find the CPU for this thread then something is
- horribly wrong. */
- if (!p) {
- abort();
- }
/* Remove the CPU from the list. */
- *lastp = p->next_cpu;
+ QTAILQ_REMOVE(&cpus, cpu, node);
cpu_list_unlock();
ts = ((CPUArchState *)cpu_env)->opaque;
if (ts->child_tidptr) {
diff --git a/memory_mapping.c b/memory_mapping.c
index 515a984..a50ca53 100644
--- a/memory_mapping.c
+++ b/memory_mapping.c
@@ -169,7 +169,7 @@ static CPUState *find_paging_enabled_cpu(CPUState *start_cpu)
{
CPUState *cpu;
- for (cpu = start_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
if (cpu_paging_enabled(cpu)) {
return cpu;
}
@@ -186,7 +186,8 @@ void qemu_get_guest_memory_mapping(MemoryMappingList *list, Error **errp)
first_paging_enabled_cpu = find_paging_enabled_cpu(first_cpu);
if (first_paging_enabled_cpu) {
- for (cpu = first_paging_enabled_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ for (cpu = first_paging_enabled_cpu; cpu != NULL;
+ cpu = CPU_NEXT(cpu)) {
Error *err = NULL;
cpu_get_memory_mapping(cpu, list, &err);
if (err) {
diff --git a/monitor.c b/monitor.c
index 5dc0aa9..2ac1ce3 100644
--- a/monitor.c
+++ b/monitor.c
@@ -1819,7 +1819,7 @@ static void do_info_numa(Monitor *mon, const QDict *qdict)
monitor_printf(mon, "%d nodes\n", nb_numa_nodes);
for (i = 0; i < nb_numa_nodes; i++) {
monitor_printf(mon, "node %d cpus:", i);
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
if (cpu->numa_node == i) {
monitor_printf(mon, " %d", cpu->cpu_index);
}
diff --git a/target-i386/helper.c b/target-i386/helper.c
index bf3e2ac..7c58e27 100644
--- a/target-i386/helper.c
+++ b/target-i386/helper.c
@@ -1231,8 +1231,7 @@ void cpu_x86_inject_mce(Monitor *mon, X86CPU *cpu, int bank,
params.mcg_status = MCG_STATUS_MCIP | MCG_STATUS_RIPV;
params.addr = 0;
params.misc = 0;
- for (other_cs = first_cpu; other_cs != NULL;
- other_cs = other_cs->next_cpu) {
+ CPU_FOREACH(other_cs) {
if (other_cs == cs) {
continue;
}
diff --git a/target-i386/misc_helper.c b/target-i386/misc_helper.c
index 957926c..93933fd 100644
--- a/target-i386/misc_helper.c
+++ b/target-i386/misc_helper.c
@@ -610,7 +610,7 @@ void helper_mwait(CPUX86State *env, int next_eip_addend)
cpu = x86_env_get_cpu(env);
cs = CPU(cpu);
/* XXX: not complete but not completely erroneous */
- if (cs->cpu_index != 0 || cs->next_cpu != NULL) {
+ if (cs->cpu_index != 0 || CPU_NEXT(cs) != NULL) {
/* more than one CPU: do not sleep because another CPU may
wake this one */
} else {
diff --git a/target-mips/op_helper.c b/target-mips/op_helper.c
index b828375..8e3a6d7 100644
--- a/target-mips/op_helper.c
+++ b/target-mips/op_helper.c
@@ -1699,15 +1699,14 @@ target_ulong helper_dvpe(CPUMIPSState *env)
CPUState *other_cs = first_cpu;
target_ulong prev = env->mvp->CP0_MVPControl;
- do {
+ CPU_FOREACH(other_cs) {
MIPSCPU *other_cpu = MIPS_CPU(other_cs);
/* Turn off all VPEs except the one executing the dvpe. */
if (&other_cpu->env != env) {
other_cpu->env.mvp->CP0_MVPControl &= ~(1 << CP0MVPCo_EVP);
mips_vpe_sleep(other_cpu);
}
- other_cs = other_cs->next_cpu;
- } while (other_cs);
+ }
return prev;
}
@@ -1716,7 +1715,7 @@ target_ulong helper_evpe(CPUMIPSState *env)
CPUState *other_cs = first_cpu;
target_ulong prev = env->mvp->CP0_MVPControl;
- do {
+ CPU_FOREACH(other_cs) {
MIPSCPU *other_cpu = MIPS_CPU(other_cs);
if (&other_cpu->env != env
@@ -1726,8 +1725,7 @@ target_ulong helper_evpe(CPUMIPSState *env)
other_cpu->env.mvp->CP0_MVPControl |= (1 << CP0MVPCo_EVP);
mips_vpe_wake(other_cpu); /* And wake it up. */
}
- other_cs = other_cs->next_cpu;
- } while (other_cs);
+ }
return prev;
}
#endif /* !CONFIG_USER_ONLY */
diff --git a/target-ppc/excp_helper.c b/target-ppc/excp_helper.c
index e9fcad8..df7886a 100644
--- a/target-ppc/excp_helper.c
+++ b/target-ppc/excp_helper.c
@@ -992,7 +992,7 @@ void helper_msgsnd(target_ulong rb)
return;
}
- for (cs = first_cpu; cs != NULL; cs = cs->next_cpu) {
+ CPU_FOREACH(cs) {
PowerPCCPU *cpu = POWERPC_CPU(cs);
CPUPPCState *cenv = &cpu->env;
diff --git a/translate-all.c b/translate-all.c
index 3b5fc7c..2c923c6 100644
--- a/translate-all.c
+++ b/translate-all.c
@@ -696,7 +696,7 @@ void tb_flush(CPUArchState *env1)
}
tcg_ctx.tb_ctx.nb_tbs = 0;
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
CPUArchState *env = cpu->env_ptr;
memset(env->tb_jmp_cache, 0, TB_JMP_CACHE_SIZE * sizeof(void *));
@@ -850,7 +850,7 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
/* remove the TB from the hash list */
h = tb_jmp_cache_hash_func(tb->pc);
- for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
+ CPU_FOREACH(cpu) {
CPUArchState *env = cpu->env_ptr;
if (env->tb_jmp_cache[h] == tb) {
--
1.8.1.4
^ permalink raw reply related [flat|nested] 13+ messages in thread
* [Qemu-devel] [RFC qom-cpu for-next 2/2] cpu: Replace qemu_for_each_cpu()
2013-07-30 16:55 [Qemu-devel] [PATCH qom-cpu for-next 0/2] QOM CPUState, part 12: CPU loops, revisited Andreas Färber
2013-07-30 16:55 ` [Qemu-devel] " Andreas Färber
@ 2013-07-30 16:55 ` Andreas Färber
2013-07-30 18:30 ` Michael S. Tsirkin
2013-08-14 11:49 ` [Qemu-devel] [PATCH qom-cpu for-next 0/2] QOM CPUState, part 12: CPU loops, revisited Andreas Färber
2 siblings, 1 reply; 13+ messages in thread
From: Andreas Färber @ 2013-07-30 16:55 UTC (permalink / raw)
To: qemu-devel; +Cc: Andreas Färber, Michael S. Tsirkin
It was introduced to loop over CPUs from target-independent code, but
since commit 182735efaf956ccab50b6d74a4fed163e0f35660 the target-independent
CPUState is used throughout.
An inline loop can be considered more efficient than a function call per
CPU, and CPU_FOREACH() hides the implementation details just as well, so
use it instead.
Suggested-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
---
arch_init.c | 11 +++++------
cpus.c | 11 ++++-------
exec.c | 9 ---------
hw/acpi/piix4.c | 20 +++++++++-----------
include/qom/cpu.h | 9 ---------
qom/cpu.c | 30 +++++++++---------------------
6 files changed, 27 insertions(+), 63 deletions(-)
diff --git a/arch_init.c b/arch_init.c
index 68a7ab7..148f76e 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -1195,15 +1195,14 @@ static void mig_sleep_cpu(void *opq)
much time in the VM. The migration thread will try to catchup.
Workload will experience a performance drop.
*/
-static void mig_throttle_cpu_down(CPUState *cpu, void *data)
-{
- async_run_on_cpu(cpu, mig_sleep_cpu, NULL);
-}
-
static void mig_throttle_guest_down(void)
{
+ CPUState *cpu;
+
qemu_mutex_lock_iothread();
- qemu_for_each_cpu(mig_throttle_cpu_down, NULL);
+ CPU_FOREACH(cpu) {
+ async_run_on_cpu(cpu, mig_sleep_cpu, NULL);
+ }
qemu_mutex_unlock_iothread();
}
diff --git a/cpus.c b/cpus.c
index 1e2fd8a..c1d0f18 100644
--- a/cpus.c
+++ b/cpus.c
@@ -837,12 +837,6 @@ static void *qemu_dummy_cpu_thread_fn(void *arg)
static void tcg_exec_all(void);
-static void tcg_signal_cpu_creation(CPUState *cpu, void *data)
-{
- cpu->thread_id = qemu_get_thread_id();
- cpu->created = true;
-}
-
static void *qemu_tcg_cpu_thread_fn(void *arg)
{
CPUState *cpu = arg;
@@ -851,7 +845,10 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
qemu_thread_get_self(cpu->thread);
qemu_mutex_lock(&qemu_global_mutex);
- qemu_for_each_cpu(tcg_signal_cpu_creation, NULL);
+ CPU_FOREACH(cpu) {
+ cpu->thread_id = qemu_get_thread_id();
+ cpu->created = true;
+ }
qemu_cond_signal(&qemu_cpu_cond);
/* wait for initial kick-off after machine start */
diff --git a/exec.c b/exec.c
index 4ee61a6..7d90119 100644
--- a/exec.c
+++ b/exec.c
@@ -362,15 +362,6 @@ CPUState *qemu_get_cpu(int index)
return NULL;
}
-void qemu_for_each_cpu(void (*func)(CPUState *cpu, void *data), void *data)
-{
- CPUState *cpu;
-
- CPU_FOREACH(cpu) {
- func(cpu, data);
- }
-}
-
void cpu_exec_init(CPUArchState *env)
{
CPUState *cpu = ENV_GET_CPU(env);
diff --git a/hw/acpi/piix4.c b/hw/acpi/piix4.c
index c885690..12e40b0 100644
--- a/hw/acpi/piix4.c
+++ b/hw/acpi/piix4.c
@@ -667,22 +667,14 @@ static void piix4_cpu_added_req(Notifier *n, void *opaque)
piix4_cpu_hotplug_req(s, CPU(opaque), PLUG);
}
-static void piix4_init_cpu_status(CPUState *cpu, void *data)
-{
- CPUStatus *g = (CPUStatus *)data;
- CPUClass *k = CPU_GET_CLASS(cpu);
- int64_t id = k->get_arch_id(cpu);
-
- g_assert((id / 8) < PIIX4_PROC_LEN);
- g->sts[id / 8] |= (1 << (id % 8));
-}
-
static int piix4_device_hotplug(DeviceState *qdev, PCIDevice *dev,
PCIHotplugState state);
static void piix4_acpi_system_hot_add_init(MemoryRegion *parent,
PCIBus *bus, PIIX4PMState *s)
{
+ CPUState *cpu;
+
memory_region_init_io(&s->io_gpe, OBJECT(s), &piix4_gpe_ops, s,
"acpi-gpe0", GPE_LEN);
memory_region_add_subregion(parent, GPE_BASE, &s->io_gpe);
@@ -693,7 +685,13 @@ static void piix4_acpi_system_hot_add_init(MemoryRegion *parent,
&s->io_pci);
pci_bus_hotplug(bus, piix4_device_hotplug, DEVICE(s));
- qemu_for_each_cpu(piix4_init_cpu_status, &s->gpe_cpu);
+ CPU_FOREACH(cpu) {
+ CPUClass *cc = CPU_GET_CLASS(cpu);
+ int64_t id = cc->get_arch_id(cpu);
+
+ g_assert((id / 8) < PIIX4_PROC_LEN);
+ s->gpe_cpu.sts[id / 8] |= (1 << (id % 8));
+ }
memory_region_init_io(&s->io_cpu, OBJECT(s), &cpu_hotplug_ops, s,
"acpi-cpu-hotplug", PIIX4_PROC_LEN);
memory_region_add_subregion(parent, PIIX4_PROC_BASE, &s->io_cpu);
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 6d052ee..3cc28b5 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -401,15 +401,6 @@ void run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data);
void async_run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data);
/**
- * qemu_for_each_cpu:
- * @func: The function to be executed.
- * @data: Data to pass to the function.
- *
- * Executes @func for each CPU.
- */
-void qemu_for_each_cpu(void (*func)(CPUState *cpu, void *data), void *data);
-
-/**
* qemu_get_cpu:
* @index: The CPUState@cpu_index value of the CPU to obtain.
*
diff --git a/qom/cpu.c b/qom/cpu.c
index aa95108..ecade59 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -25,30 +25,18 @@
#include "qemu/log.h"
#include "sysemu/sysemu.h"
-typedef struct CPUExistsArgs {
- int64_t id;
- bool found;
-} CPUExistsArgs;
-
-static void cpu_exist_cb(CPUState *cpu, void *data)
-{
- CPUClass *klass = CPU_GET_CLASS(cpu);
- CPUExistsArgs *arg = data;
-
- if (klass->get_arch_id(cpu) == arg->id) {
- arg->found = true;
- }
-}
-
bool cpu_exists(int64_t id)
{
- CPUExistsArgs data = {
- .id = id,
- .found = false,
- };
+ CPUState *cpu;
+
+ CPU_FOREACH(cpu) {
+ CPUClass *cc = CPU_GET_CLASS(cpu);
- qemu_for_each_cpu(cpu_exist_cb, &data);
- return data.found;
+ if (cc->get_arch_id(cpu) == id) {
+ return true;
+ }
+ }
+ return false;
}
bool cpu_paging_enabled(const CPUState *cpu)
--
1.8.1.4
* Re: [Qemu-devel] [RFC qom-cpu for-next 2/2] cpu: Replace qemu_for_each_cpu()
2013-07-30 16:55 ` [Qemu-devel] [RFC qom-cpu for-next 2/2] cpu: Replace qemu_for_each_cpu() Andreas Färber
@ 2013-07-30 18:30 ` Michael S. Tsirkin
0 siblings, 0 replies; 13+ messages in thread
From: Michael S. Tsirkin @ 2013-07-30 18:30 UTC (permalink / raw)
To: Andreas Färber; +Cc: qemu-devel
On Tue, Jul 30, 2013 at 06:55:59PM +0200, Andreas Färber wrote:
> It was introduced to loop over CPUs from target-independent code, but
> since commit 182735efaf956ccab50b6d74a4fed163e0f35660 the target-independent
> CPUState is used throughout.
>
> An inline loop can be considered more efficient than a function call per
> CPU, and CPU_FOREACH() hides the implementation details just as well, so
> use it instead.
>
> Suggested-by: Markus Armbruster <armbru@redhat.com>
> Signed-off-by: Andreas Färber <afaerber@suse.de>
Efficiency is unlikely to be stellar, seeing as there
is a lot of string matching going on in QOM.
But I don't think it matters in any of this code, so
Acked-by: Michael S. Tsirkin <mst@redhat.com>
* Re: [Qemu-devel] [PATCH qom-cpu for-next 0/2] QOM CPUState, part 12: CPU loops, revisited
2013-07-30 16:55 [Qemu-devel] [PATCH qom-cpu for-next 0/2] QOM CPUState, part 12: CPU loops, revisited Andreas Färber
2013-07-30 16:55 ` [Qemu-devel] " Andreas Färber
2013-07-30 16:55 ` [Qemu-devel] [RFC qom-cpu for-next 2/2] cpu: Replace qemu_for_each_cpu() Andreas Färber
@ 2013-08-14 11:49 ` Andreas Färber
2013-08-30 14:42 ` Andreas Färber
2 siblings, 1 reply; 13+ messages in thread
From: Andreas Färber @ 2013-08-14 11:49 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Maydell, Eduardo Habkost, Michael S. Tsirkin, Riku Voipio,
Markus Armbruster, Alexander Graf, Blue Swirl, Anthony Liguori,
Paolo Bonzini, Igor Mammedov, Aurélien Jarno
Am 30.07.2013 18:55, schrieb Andreas Färber:
> Hello everyone,
>
> A short CPUState series for a change: This mini-series replaces our home-grown
> CPU list (first_cpu global and CPUState::next_cpu) with a QTAILQ.
>
> To avoid repeated QTAILQ_FOREACH(foo, cpus, node) I am proposing a wrapper
> CPU_FOREACH(foo) to hide the implementation details, as requested by mst,
> while keeping inline loops, as requested by Markus.
>
> The bulk of patch 1 is mechanical for/while loop conversions, except for:
> exec.c: QTAILQ_INSERT_TAIL()
> linux-user/syscall.c: QTAILQ_REMOVE() -- untested!
Ping! The 1.7 merge window is approaching and Michael has ack'ed
adopting the macros introduced in patch 1/2, but that patch has no
review or test reports yet. Riku? Peter? Alex?
Latest rebased version on qom-cpu-12 branch.
Regards,
Andreas
> Having such a stable API will help with vCPU hot-unplug.
>
> Patch 2 is controversial: It drops qemu_for_each_cpu() on Markus' request.
> I'm including it as RFC to showcase what exactly those suggestions involve.
> Converting from for (cpu = ...) to CPU_FOREACH(cpu) is obviously much more
> straightforward than converting between function and loop, therefore with
> patch 1 in mind, it seems best to still use for loops, which since recently
> no longer require CPUArchState, the original trigger for qemu_for_each_cpu().
>
> Available for testing at:
> git://github.com/afaerber/qemu-cpu.git qom-cpu-12.v1
> https://github.com/afaerber/qemu-cpu/commits/qom-cpu-12.v1
>
> Regards,
> Andreas
>
> From CPUState part 10 v1:
> * Instead of replacing every loop with qemu_for_each_cpu()
> replace loops with CPU_FOREACH() macro, based on QTAILQ_FOREACH().
>
> Cc: Markus Armbruster <armbru@redhat.com>
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Cc: Anthony Liguori <anthony@codemonkey.ws>
> Cc: Blue Swirl <blauwirbel@gmail.com>
> Cc: Aurélien Jarno <aurelien@aurel32.net>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Eduardo Habkost <ehabkost@redhat.com>
> Cc: Igor Mammedov <imammedo@redhat.com>
> Cc: Riku Voipio <riku.voipio@iki.fi>
>
> Andreas Färber (2):
> cpu: Use QTAILQ for CPU list
> cpu: Replace qemu_for_each_cpu()
>
> arch_init.c | 11 ++++-----
> cpus.c | 58 ++++++++++++++++++++---------------------------
> cputlb.c | 2 +-
> dump.c | 10 ++++----
> exec.c | 33 ++++++++-------------------
> gdbstub.c | 14 ++++++------
> hw/acpi/piix4.c | 20 ++++++++--------
> hw/arm/boot.c | 2 +-
> hw/i386/kvm/clock.c | 2 +-
> hw/i386/kvmvapic.c | 2 +-
> hw/i386/pc.c | 3 +--
> hw/ppc/e500.c | 2 +-
> hw/ppc/ppc.c | 2 +-
> hw/ppc/spapr.c | 4 ++--
> include/qom/cpu.h | 20 ++++++++--------
> kvm-all.c | 8 +++----
> linux-user/elfload.c | 2 +-
> linux-user/main.c | 10 +++++---
> linux-user/syscall.c | 17 ++------------
> memory_mapping.c | 5 ++--
> monitor.c | 2 +-
> qom/cpu.c | 30 ++++++++----------------
> target-i386/helper.c | 3 +--
> target-i386/misc_helper.c | 2 +-
> target-mips/op_helper.c | 10 ++++----
> target-ppc/excp_helper.c | 2 +-
> translate-all.c | 4 ++--
> 27 files changed, 113 insertions(+), 167 deletions(-)
>
--
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg
* Re: [Qemu-devel] [PATCH qom-cpu for-next 1/2] cpu: Use QTAILQ for CPU list
2013-07-30 16:55 ` [Qemu-devel] " Andreas Färber
@ 2013-08-21 14:12 ` Andreas Färber
0 siblings, 0 replies; 13+ messages in thread
From: Andreas Färber @ 2013-08-21 14:12 UTC (permalink / raw)
To: Peter Maydell
Cc: qemu-devel, Anthony Liguori, kvm, Gleb Natapov, Riku Voipio,
Alexander Graf, Luiz Capitulino, qemu-ppc, Paul Brook,
Scott Wood, Paolo Bonzini, Aurelien Jarno
On 30.07.2013 18:55, Andreas Färber wrote:
> Introduce CPU_FOREACH(), CPU_FOREACH_SAFE() and CPU_NEXT() shorthand
> macros.
>
> Signed-off-by: Andreas Färber <afaerber@suse.de>
Needs the following addition now:
diff --git a/hw/cpu/a15mpcore.c b/hw/cpu/a15mpcore.c
index af182da..9d0e27e 100644
--- a/hw/cpu/a15mpcore.c
+++ b/hw/cpu/a15mpcore.c
@@ -72,9 +72,15 @@ static int a15mp_priv_init(SysBusDevice *dev)
     /* Wire the outputs from each CPU's generic timer to the
      * appropriate GIC PPI inputs
      */
-    for (i = 0, cpu = first_cpu; i < s->num_cpu; i++, cpu = cpu->next_cpu) {
+    i = 0;
+    CPU_FOREACH(cpu) {
         DeviceState *cpudev = DEVICE(cpu);
         int ppibase = s->num_irq - 32 + i * 32;
+
+        if (i >= s->num_cpu) {
+            break;
+        }
+
         /* physical timer; we wire it up to the non-secure timer's ID,
          * since a real A15 always has TrustZone but QEMU doesn't.
          */
@@ -83,6 +89,7 @@ static int a15mp_priv_init(SysBusDevice *dev)
         /* virtual timer */
         qdev_connect_gpio_out(cpudev, 1,
                               qdev_get_gpio_in(s->gic, ppibase + 27));
+        i++;
     }
 
     /* Memory map (addresses are offsets from PERIPHBASE):
--
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg
* Re: [Qemu-devel] [PATCH qom-cpu for-next 1/2] cpu: Use QTAILQ for CPU list
2013-08-21 14:12 ` Andreas Färber
@ 2013-08-21 14:36 ` Peter Maydell
0 siblings, 0 replies; 13+ messages in thread
From: Peter Maydell @ 2013-08-21 14:36 UTC (permalink / raw)
To: Andreas Färber
Cc: QEMU Developers, Anthony Liguori, kvm, Gleb Natapov, Riku Voipio,
Alexander Graf, Luiz Capitulino, qemu-ppc, Paul Brook,
Scott Wood, Paolo Bonzini, Aurelien Jarno
On 21 August 2013 15:12, Andreas Färber <afaerber@suse.de> wrote:
> -    for (i = 0, cpu = first_cpu; i < s->num_cpu; i++, cpu = cpu->next_cpu) {
> +    i = 0;
> +    CPU_FOREACH(cpu) {
>          DeviceState *cpudev = DEVICE(cpu);
>          int ppibase = s->num_irq - 32 + i * 32;
> +
> +        if (i >= s->num_cpu) {
> +            break;
> +        }
> +
>          /* physical timer; we wire it up to the non-secure timer's ID,
>           * since a real A15 always has TrustZone but QEMU doesn't.
>           */
> @@ -83,6 +89,7 @@ static int a15mp_priv_init(SysBusDevice *dev)
>          /* virtual timer */
>          qdev_connect_gpio_out(cpudev, 1,
>                                qdev_get_gpio_in(s->gic, ppibase + 27));
> +        i++;
>      }
It seems a bit ugly to have to both enumerate the CPUs
via CPU_FOREACH and update an index i simultaneously.
Isn't there any way to either say "give me the CPU pointer for
CPU i" or "give me the index i of this CPU" ?
(there are a few bits post-arm_pic-removal that could be
a little cleaner with one or the other of the above.)
thanks
-- PMM
* Re: [Qemu-devel] [PATCH qom-cpu for-next 1/2] cpu: Use QTAILQ for CPU list
2013-08-21 14:36 ` Peter Maydell
@ 2013-08-21 16:32 ` Andreas Färber
0 siblings, 0 replies; 13+ messages in thread
From: Andreas Färber @ 2013-08-21 16:32 UTC (permalink / raw)
To: Peter Maydell
Cc: QEMU Developers, Anthony Liguori, kvm, Gleb Natapov, Riku Voipio,
Alexander Graf, Luiz Capitulino, qemu-ppc, Paul Brook,
Scott Wood, Paolo Bonzini, Aurelien Jarno
On 21.08.2013 16:36, Peter Maydell wrote:
> On 21 August 2013 15:12, Andreas Färber <afaerber@suse.de> wrote:
>
>> -    for (i = 0, cpu = first_cpu; i < s->num_cpu; i++, cpu = cpu->next_cpu) {
>> +    i = 0;
>> +    CPU_FOREACH(cpu) {
>>          DeviceState *cpudev = DEVICE(cpu);
>>          int ppibase = s->num_irq - 32 + i * 32;
>> +
>> +        if (i >= s->num_cpu) {
>> +            break;
>> +        }
>> +
>>          /* physical timer; we wire it up to the non-secure timer's ID,
>>           * since a real A15 always has TrustZone but QEMU doesn't.
>>           */
>> @@ -83,6 +89,7 @@ static int a15mp_priv_init(SysBusDevice *dev)
>>          /* virtual timer */
>>          qdev_connect_gpio_out(cpudev, 1,
>>                                qdev_get_gpio_in(s->gic, ppibase + 27));
>> +        i++;
>>      }
>
> It seems a bit ugly to have to both enumerate the CPUs
> via CPU_FOREACH and update an index i simultaneously.
Same for the original code. :)
> Isn't there any way to either say "give me the CPU pointer for
> CPU i" or "give me the index i of this CPU" ?
There is:
diff --git a/hw/cpu/a15mpcore.c b/hw/cpu/a15mpcore.c
index 9d0e27e..1263b12 100644
--- a/hw/cpu/a15mpcore.c
+++ b/hw/cpu/a15mpcore.c
@@ -50,7 +50,6 @@ static int a15mp_priv_init(SysBusDevice *dev)
     SysBusDevice *busdev;
     const char *gictype = "arm_gic";
     int i;
-    CPUState *cpu;
 
     if (kvm_irqchip_in_kernel()) {
         gictype = "kvm-arm-gic";
@@ -72,15 +71,10 @@ static int a15mp_priv_init(SysBusDevice *dev)
     /* Wire the outputs from each CPU's generic timer to the
      * appropriate GIC PPI inputs
      */
-    i = 0;
-    CPU_FOREACH(cpu) {
-        DeviceState *cpudev = DEVICE(cpu);
+    for (i = 0; i < s->num_cpu; i++) {
+        DeviceState *cpudev = DEVICE(qemu_get_cpu(i));
         int ppibase = s->num_irq - 32 + i * 32;
 
-        if (i >= s->num_cpu) {
-            break;
-        }
-
         /* physical timer; we wire it up to the non-secure timer's ID,
          * since a real A15 always has TrustZone but QEMU doesn't.
          */
@@ -89,7 +83,6 @@ static int a15mp_priv_init(SysBusDevice *dev)
         /* virtual timer */
         qdev_connect_gpio_out(cpudev, 1,
                               qdev_get_gpio_in(s->gic, ppibase + 27));
-        i++;
     }
 
     /* Memory map (addresses are offsets from PERIPHBASE):
--
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg
* Re: [Qemu-devel] [PATCH qom-cpu for-next 0/2] QOM CPUState, part 12: CPU loops, revisited
2013-08-14 11:49 ` [Qemu-devel] [PATCH qom-cpu for-next 0/2] QOM CPUState, part 12: CPU loops, revisited Andreas Färber
@ 2013-08-30 14:42 ` Andreas Färber
0 siblings, 0 replies; 13+ messages in thread
From: Andreas Färber @ 2013-08-30 14:42 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Maydell, Eduardo Habkost, Michael S. Tsirkin, Riku Voipio,
Alexander Graf, Markus Armbruster, Blue Swirl, Anthony Liguori,
Igor Mammedov, Paolo Bonzini, Aurélien Jarno
On 14.08.2013 13:49, Andreas Färber wrote:
> On 30.07.2013 18:55, Andreas Färber wrote:
>> Hello everyone,
>>
>> A short CPUState series for a change: This mini-series replaces our home-grown
>> CPU list (first_cpu global and CPUState::next_cpu) with a QTAILQ.
>>
>> To avoid repeated QTAILQ_FOREACH(foo, cpus, node) I am proposing a wrapper
>> CPU_FOREACH(foo) to hide the implementation details, as requested by mst,
>> while keeping inline loops, as requested by Markus.
>>
>> The bulk of patch 1 is mechanical for/while loop conversions, except for:
>> exec.c: QTAILQ_INSERT_TAIL()
>> linux-user/syscall.c: QTAILQ_REMOVE() -- untested!
>
> Ping! The 1.7 merge window is approaching and Michael has ack'ed
> adopting the macros introduced in patch 1/2, but that patch has no
> review or test reports yet. Riku? Peter? Alex?
Applied to qom-cpu:
https://github.com/afaerber/qemu-cpu/commits/qom-cpu
Andreas
>
> Latest rebased version on qom-cpu-12 branch.
>
> Regards,
> Andreas
>
>> Having such a stable API will help with vCPU hot-unplug.
>>
>> Patch 2 is controversial: It drops qemu_for_each_cpu() on Markus' request.
>> I'm including it as RFC to showcase what exactly those suggestions involve.
>> Converting from for (cpu = ...) to CPU_FOREACH(cpu) is obviously much more
>> straightforward than converting between function and loop, therefore with
>> patch 1 in mind, it seems best to still use for loops, which since recently
>> no longer require CPUArchState, the original trigger for qemu_for_each_cpu().
>>
>> Available for testing at:
>> git://github.com/afaerber/qemu-cpu.git qom-cpu-12.v1
>> https://github.com/afaerber/qemu-cpu/commits/qom-cpu-12.v1
>>
>> Regards,
>> Andreas
>>
>> From CPUState part 10 v1:
>> * Instead of replacing every loop with qemu_for_each_cpu()
>> replace loops with CPU_FOREACH() macro, based on QTAILQ_FOREACH().
>>
>> Cc: Markus Armbruster <armbru@redhat.com>
>> Cc: Michael S. Tsirkin <mst@redhat.com>
>> Cc: Anthony Liguori <anthony@codemonkey.ws>
>> Cc: Blue Swirl <blauwirbel@gmail.com>
>> Cc: Aurélien Jarno <aurelien@aurel32.net>
>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>> Cc: Eduardo Habkost <ehabkost@redhat.com>
>> Cc: Igor Mammedov <imammedo@redhat.com>
>> Cc: Riku Voipio <riku.voipio@iki.fi>
>>
>> Andreas Färber (2):
>> cpu: Use QTAILQ for CPU list
>> cpu: Replace qemu_for_each_cpu()
>>
>> arch_init.c | 11 ++++-----
>> cpus.c | 58 ++++++++++++++++++++---------------------------
>> cputlb.c | 2 +-
>> dump.c | 10 ++++----
>> exec.c | 33 ++++++++-------------------
>> gdbstub.c | 14 ++++++------
>> hw/acpi/piix4.c | 20 ++++++++--------
>> hw/arm/boot.c | 2 +-
>> hw/i386/kvm/clock.c | 2 +-
>> hw/i386/kvmvapic.c | 2 +-
>> hw/i386/pc.c | 3 +--
>> hw/ppc/e500.c | 2 +-
>> hw/ppc/ppc.c | 2 +-
>> hw/ppc/spapr.c | 4 ++--
>> include/qom/cpu.h | 20 ++++++++--------
>> kvm-all.c | 8 +++----
>> linux-user/elfload.c | 2 +-
>> linux-user/main.c | 10 +++++---
>> linux-user/syscall.c | 17 ++------------
>> memory_mapping.c | 5 ++--
>> monitor.c | 2 +-
>> qom/cpu.c | 30 ++++++++----------------
>> target-i386/helper.c | 3 +--
>> target-i386/misc_helper.c | 2 +-
>> target-mips/op_helper.c | 10 ++++----
>> target-ppc/excp_helper.c | 2 +-
>> translate-all.c | 4 ++--
>> 27 files changed, 113 insertions(+), 167 deletions(-)
>>
>
>
--
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg
Thread overview: 13+ messages
2013-07-30 16:55 [Qemu-devel] [PATCH qom-cpu for-next 0/2] QOM CPUState, part 12: CPU loops, revisited Andreas Färber
2013-07-30 16:55 ` [PATCH qom-cpu for-next 1/2] cpu: Use QTAILQ for CPU list Andreas Färber
2013-07-30 16:55 ` [Qemu-devel] " Andreas Färber
2013-08-21 14:12 ` Andreas Färber
2013-08-21 14:36 ` Peter Maydell
2013-08-21 16:32 ` Andreas Färber
2013-07-30 16:55 ` [Qemu-devel] [RFC qom-cpu for-next 2/2] cpu: Replace qemu_for_each_cpu() Andreas Färber
2013-07-30 18:30 ` Michael S. Tsirkin
2013-08-14 11:49 ` [Qemu-devel] [PATCH qom-cpu for-next 0/2] QOM CPUState, part 12: CPU loops, revisited Andreas Färber
2013-08-30 14:42 ` Andreas Färber