On 02/11/2014 11:33 PM, Chunyan Liu wrote:
> qed.c: replace QEMUOptionParameter with QemuOpts
>
> Signed-off-by: Dong Xu Wang
> Signed-off-by: Chunyan Liu
> ---
>  block/qed.c | 89 +++++++++++++++++++++++++++++------------------------------
>  block/qed.h |  3 +-
>  2 files changed, 45 insertions(+), 47 deletions(-)
>
> +    cluster_size = qemu_opt_get_size_del(opts,
> +                                         BLOCK_OPT_CLUSTER_SIZE,
> +                                         QED_DEFAULT_CLUSTER_SIZE);
> +    table_size = qemu_opt_get_size_del(opts, BLOCK_OPT_TABLE_SIZE,
> +                                       QED_DEFAULT_TABLE_SIZE);
>
> +    {
> +        .name = BLOCK_OPT_CLUSTER_SIZE,
> +        .type = QEMU_OPT_SIZE,
> +        .help = "Cluster size (in bytes)",
> +        .def_value_str = stringify(QED_DEFAULT_CLUSTER_SIZE)
> +    },
> +    {
> +        .name = BLOCK_OPT_TABLE_SIZE,
> +        .type = QEMU_OPT_SIZE,
> +        .help = "L1/L2 table size (in clusters)"
> +    },

Why does cluster size list a default, but table size does not?

> +++ b/block/qed.h
> @@ -43,7 +43,7 @@
>   *
>   * All fields are little-endian on disk.
>   */
> -
> +#define QED_DEFAULT_CLUSTER_SIZE 65536
>  enum {
>      QED_MAGIC = 'Q' | 'E' << 8 | 'D' << 16 | '\0' << 24,
>
> @@ -69,7 +69,6 @@ enum {
>       */
>      QED_MIN_CLUSTER_SIZE = 4 * 1024, /* in bytes */
>      QED_MAX_CLUSTER_SIZE = 64 * 1024 * 1024,
> -    QED_DEFAULT_CLUSTER_SIZE = 64 * 1024,

Why this change? I actually prefer enums over #defines, because they behave
nicer in gdb.

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org
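
To illustrate the gdb point above, a minimal sketch (hypothetical file and
names, not from the patch), assuming a build with plain -g:

    /* consts.c -- hypothetical example */
    #define DEFAULT_SIZE_MACRO 65536      /* erased by the preprocessor */
    enum { DEFAULT_SIZE_ENUM = 65536 };   /* survives into the debug info */

    int main(void)
    {
        return DEFAULT_SIZE_ENUM != DEFAULT_SIZE_MACRO;
    }

In gdb, "print DEFAULT_SIZE_ENUM" works out of the box because the enumerator
is a real symbol in the debug info, while "print DEFAULT_SIZE_MACRO" fails
with "No symbol ... in current context" unless the binary was compiled with
-g3 so that macro definitions are also recorded.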