From mboxrd@z Thu Jan 1 00:00:00 1970
From: Zhi Yong Wu
Subject: Re: [Qemu-devel] [PATCH v4 3/3] The support for queue timer and throttling algorithm
Date: Fri, 5 Aug 2011 10:48:11 +0800
References: <1312179955-23536-1-git-send-email-wuzhy@linux.vnet.ibm.com>
 <1312179955-23536-4-git-send-email-wuzhy@linux.vnet.ibm.com>
 <20110801203930.GN7358@us.ibm.com>
In-Reply-To: <20110801203930.GN7358@us.ibm.com>
To: Ryan Harper
Cc: Zhi Yong Wu, qemu-devel@nongnu.org, kwolf@redhat.com, aliguori@us.ibm.com,
 stefanha@linux.vnet.ibm.com, kvm@vger.kernel.org, mtosatti@redhat.com,
 luowenj@cn.ibm.com, raharper@us.ibm.com
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Aug 2, 2011 at 4:39 AM, Ryan Harper wrote:
> * Zhi Yong Wu [2011-08-01 01:32]:
>> Note:
>>       1.) When the bps/iops limits are set to a small value, such as 511 bytes/s, the VM will hang. We are considering how to handle this scenario.
>>       2.) When a "dd" command is issued in the guest with a large block size, such as "bs=1024K", the resulting speed will be slightly higher than the limits.
>>
>> For these problems, if you have a better idea, please let us know. :)
>>
>> Signed-off-by: Zhi Yong Wu
>> ---
>>  block.c     |  302 ++++++++++++++++++++++++++++++++++++++++++++++++++++--
>>  block.h     |    1 -
>>  block_int.h |   29 ++++++
>>  3 files changed, 323 insertions(+), 9 deletions(-)
>>
>> diff --git a/block.c b/block.c
>> index 24a25d5..42763a3 100644
>> --- a/block.c
>> +++ b/block.c
>> @@ -29,6 +29,9 @@
>>  #include "module.h"
>>  #include "qemu-objects.h"
>>
>> +#include "qemu-timer.h"
>> +#include "block/blk-queue.h"
>> +
>>  #ifdef CONFIG_BSD
>>  #include
>>  #include
>> @@ -58,6 +61,13 @@ static int bdrv_read_em(BlockDriverState *bs, int64_t sector_num,
>>  static int bdrv_write_em(BlockDriverState *bs, int64_t sector_num,
>>                        const uint8_t *buf, int nb_sectors);
>>
>> +static bool bdrv_exceed_bps_limits(BlockDriverState *bs, int nb_sectors,
>> +        bool is_write, double elapsed_time, uint64_t *wait);
>> +static bool bdrv_exceed_iops_limits(BlockDriverState *bs, bool is_write,
>> +        double elapsed_time, uint64_t *wait);
>> +static bool bdrv_exceed_io_limits(BlockDriverState *bs, int nb_sectors,
>> +        bool is_write, uint64_t *wait);
>> +
>>  static QTAILQ_HEAD(, BlockDriverState) bdrv_states =
>>      QTAILQ_HEAD_INITIALIZER(bdrv_states);
>>
>> @@ -90,6 +100,20 @@ int is_windows_drive(const char *filename)
>>  }
>>  #endif
>>
>> +static int bdrv_io_limits_enable(BlockIOLimit *io_limits)
>> +{
>> +    if ((io_limits->bps[0] == 0)
>> +         && (io_limits->bps[1] == 0)
>> +         && (io_limits->bps[2] == 0)
>> +         && (io_limits->iops[0] == 0)
>> +         && (io_limits->iops[1] == 0)
>> +         && (io_limits->iops[2] == 0)) {
>> +        return 0;
>
>
> I'd add a field to the BlockIOLimit structure, and just do:
>
>  static int bdrv_io_limits_enabled(BlockIOLimit *io_limits)
>  {
>        return io_limits->enabled;
>  }
>
> Update bdrv_set_io_limits() to do the original check you have, and if
> one of the fields is set, update the enabled flag.
>
> We incur that logic on *every* request, so let's make it as cheap as
> possible.
Good point; it has been adopted in the qmp/hmp patch (a rough sketch is
below, after my other replies).
>
>> +    }
>> +
>> +    return 1;
>> +}
>> +
>>  /* check if the path starts with ":" */
>>  static int path_has_protocol(const char *path)
>>  {
>> @@ -167,6 +191,28 @@ void path_combine(char *dest, int dest_size,
>>      }
>>  }
>>
>> +static void bdrv_block_timer(void *opaque)
>> +{
>> +    BlockDriverState *bs = opaque;
>> +    BlockQueue *queue = bs->block_queue;
>> +
>> +    while (!QTAILQ_EMPTY(&queue->requests)) {
>> +        BlockIORequest *request = NULL;
>> +        int ret = 0;
>> +
>> +        request = QTAILQ_FIRST(&queue->requests);
>> +        QTAILQ_REMOVE(&queue->requests, request, entry);
>> +
>> +        ret = qemu_block_queue_handler(request);
>> +        if (ret == 0) {
>> +            QTAILQ_INSERT_HEAD(&queue->requests, request, entry);
>> +            break;
>
> btw, I did some tracing and I never saw a request get re-added here.
> Now, ideally, I think you were trying to do the following:
>
> The request is coming from the queue; if we exceed our limits, then we'd
> get back NULL from the handler and we'd requeue the request at the
Right.
> head... but that doesn't actually happen.
It can happen. For example, if block request req1 arrives and exceeds the
limits, the block timer is set to fire in 10 ms; when block request req2
arrives and also exceeds the limits, the block timer is updated to fire in
3 ms. Assume no further requests arrive. When the timer fires, req2 will be
serviced; but when the next queued request, req1, is taken from the block
queue, it may still exceed the limits and will need to be queued again.
Right?
>
> Rather, if we exceed our limits, we invoke qemu_block_queue_enqueue()
> again, which allocates a whole new request and puts it at the tail.
> I think we want to update the throttling logic in readv/writev to return
> NULL if bs->req_from_queue == true and we exceed the limits.  Then this
No, NULL indicates that the block emulation layer failed to start the
readv/writev request.
> logic will do the right thing by inserting the request back at the head.
We've used a request pool. Maybe we can release the request back to this
pool when it needs to be freed.
>
>
>> +        }
>> +
>> +        qemu_free(request);
>
>
> See my email to blk-queue.c on how we can eliminate free'ing the request
> here.
After I get to the office next week, I will check it.
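
Coming back to the bdrv_io_limits_enabled() suggestion above, the idea in the
qmp/hmp patch is roughly the following (only a sketch along the lines you
described; the field name and the exact code in that patch may differ):

    typedef struct BlockIOLimit {
        uint64_t bps[3];
        uint64_t iops[3];
        bool     enabled;   /* cached: true if any bps/iops limit is non-zero */
    } BlockIOLimit;

    void bdrv_set_io_limits(BlockDriverState *bs, BlockIOLimit *io_limits)
    {
        int i;

        bs->io_limits = *io_limits;
        bs->io_limits.enabled = false;

        /* compute the flag once, when the limits are set ... */
        for (i = 0; i < 3; i++) {
            if (io_limits->bps[i] || io_limits->iops[i]) {
                bs->io_limits.enabled = true;
                break;
            }
        }
    }

    /* ... so the per-request check is a single load */
    static int bdrv_io_limits_enabled(BlockIOLimit *io_limits)
    {
        return io_limits->enabled;
    }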
>
>> +    }
>> +}
>> +
>>  void bdrv_register(BlockDriver *bdrv)
>>  {
>>      if (!bdrv->bdrv_aio_readv) {
>> @@ -642,6 +688,19 @@ int bdrv_open(BlockDriverState *bs, const char *filename, int flags,
>>              bs->change_cb(bs->change_opaque, CHANGE_MEDIA);
>>      }
>>
>> +    /* throttling disk I/O limits */
>> +    if (bdrv_io_limits_enable(&bs->io_limits)) {
>> +        bs->req_from_queue = false;
>> +        bs->block_queue    = qemu_new_block_queue();
>> +        bs->block_timer    = qemu_new_timer_ns(vm_clock, bdrv_block_timer, bs);
>> +
>> +        bs->slice_start[0] = qemu_get_clock_ns(vm_clock);
>> +        bs->slice_start[1] = qemu_get_clock_ns(vm_clock);
>> +
>> +        bs->slice_end[0]   = qemu_get_clock_ns(vm_clock) + BLOCK_IO_SLICE_TIME;
>> +        bs->slice_end[1]   = qemu_get_clock_ns(vm_clock) + BLOCK_IO_SLICE_TIME;
>> +    }
>> +
>>      return 0;
>>
>>  unlink_and_fail:
>> @@ -680,6 +739,16 @@ void bdrv_close(BlockDriverState *bs)
>>          if (bs->change_cb)
>>              bs->change_cb(bs->change_opaque, CHANGE_MEDIA);
>>      }
>> +
>> +    /* throttling disk I/O limits */
>> +    if (bs->block_queue) {
>> +        qemu_del_block_queue(bs->block_queue);
>> +    }
>> +
>> +    if (bs->block_timer) {
>> +        qemu_del_timer(bs->block_timer);
>> +        qemu_free_timer(bs->block_timer);
>> +    }
>>  }
>>
>>  void bdrv_close_all(void)
>> @@ -1312,6 +1381,14 @@ void bdrv_get_geometry_hint(BlockDriverState *bs,
>>      *psecs = bs->secs;
>>  }
>>
>> +/* throttling disk io limits */
>> +void bdrv_set_io_limits(BlockDriverState *bs,
>> +                        BlockIOLimit *io_limits)
>> +{
>> +    memset(&bs->io_limits, 0, sizeof(BlockIOLimit));
>> +    bs->io_limits = *io_limits;
>> +}
>> +
>>  /* Recognize floppy formats */
>>  typedef struct FDFormat {
>>      FDriveType drive;
>> @@ -2111,6 +2188,165 @@ char *bdrv_snapshot_dump(char *buf, int buf_size, QEMUSnapshotInfo *sn)
>>      return buf;
>>  }
>>
>> +static bool bdrv_exceed_bps_limits(BlockDriverState *bs, int nb_sectors,
>> +                 bool is_write, double elapsed_time, uint64_t *wait) {
>> +    uint64_t bps_limit = 0;
>> +    double   bytes_limit, bytes_disp, bytes_res;
>> +    double   slice_time, wait_time;
>> +
>> +    if (bs->io_limits.bps[BLOCK_IO_LIMIT_TOTAL]) {
>> +        bps_limit = bs->io_limits.bps[BLOCK_IO_LIMIT_TOTAL];
>> +    } else if (bs->io_limits.bps[is_write]) {
>> +        bps_limit = bs->io_limits.bps[is_write];
>> +    } else {
>> +        if (wait) {
>> +            *wait = 0;
>> +        }
>> +
>> +        return false;
>> +    }
>> +
>> +    slice_time = bs->slice_end[is_write] - bs->slice_start[is_write];
>> +    slice_time /= (BLOCK_IO_SLICE_TIME * 10.0);
>> +    bytes_limit = bps_limit * slice_time;
>> +    bytes_disp  = bs->io_disps.bytes[is_write];
>> +    if (bs->io_limits.bps[BLOCK_IO_LIMIT_TOTAL]) {
>> +        bytes_disp += bs->io_disps.bytes[!is_write];
>> +    }
>> +
>> +    bytes_res   = (unsigned) nb_sectors * BDRV_SECTOR_SIZE;
>> +
>> +    if (bytes_disp + bytes_res <= bytes_limit) {
>> +        if (wait) {
>> +            *wait = 0;
>> +        }
>> +
>> +        return false;
>> +    }
>> +
>> +    /* Calc approx time to dispatch */
>> +    wait_time = (bytes_disp + bytes_res) / bps_limit - elapsed_time;
>> +
>> +    if (wait) {
>> +        *wait = wait_time * BLOCK_IO_SLICE_TIME * 10;
>> +    }
>> +
>> +    return true;
>> +}
>> +
>> +static bool bdrv_exceed_iops_limits(BlockDriverState *bs, bool is_write,
>> +                              double elapsed_time, uint64_t *wait) {
>> +    uint64_t iops_limit = 0;
>> +    double   ios_limit, ios_disp;
>> +    double   slice_time, wait_time;
>> +
>> +    if (bs->io_limits.iops[BLOCK_IO_LIMIT_TOTAL]) {
>> +        iops_limit = bs->io_limits.iops[BLOCK_IO_LIMIT_TOTAL];
>> +    } else if (bs->io_limits.iops[is_write]) {
>> +        iops_limit = bs->io_limits.iops[is_write];
>> +    } else {
>> +        if (wait) {
>> +            *wait = 0;
>> +        }
>> +
>> +        return false;
>> +    }
>> +
>> +    slice_time = bs->slice_end[is_write] - bs->slice_start[is_write];
>> +    slice_time /= (BLOCK_IO_SLICE_TIME * 10.0);
>> +    ios_limit  = iops_limit * slice_time;
>> +    ios_disp   = bs->io_disps.ios[is_write];
>> +    if (bs->io_limits.iops[BLOCK_IO_LIMIT_TOTAL]) {
>> +        ios_disp += bs->io_disps.ios[!is_write];
>> +    }
>> +
>> +    if (ios_disp + 1 <= ios_limit) {
>> +        if (wait) {
>> +            *wait = 0;
>> +        }
>> +
>> +        return false;
>> +    }
>> +
>> +    /* Calc approx time to dispatch */
>> +    wait_time = (ios_disp + 1) / iops_limit;
>> +    if (wait_time > elapsed_time) {
>> +        wait_time = wait_time - elapsed_time;
>> +    } else {
>> +        wait_time = 0;
>> +    }
>> +
>> +    if (wait) {
>> +        *wait = wait_time * BLOCK_IO_SLICE_TIME * 10;
>> +    }
>> +
>> +    return true;
>> +}
>> +
>> +static bool bdrv_exceed_io_limits(BlockDriverState *bs, int nb_sectors,
>> +                               bool is_write, uint64_t *wait) {
>> +    int64_t  real_time, real_slice;
>> +    uint64_t bps_wait = 0, iops_wait = 0, max_wait;
>> +    double   elapsed_time;
>> +    int      bps_ret, iops_ret;
>> +
>> +    real_time = qemu_get_clock_ns(vm_clock);
>> +    real_slice = bs->slice_end[is_write] - bs->slice_start[is_write];
>> +    if ((bs->slice_start[is_write] < real_time)
>> +        && (bs->slice_end[is_write] > real_time)) {
>> +        bs->slice_end[is_write]   = real_time + BLOCK_IO_SLICE_TIME;
>> +    } else {
>> +        bs->slice_start[is_write]     = real_time;
>> +        bs->slice_end[is_write]       = real_time + BLOCK_IO_SLICE_TIME;
>> +
>> +        bs->io_disps.bytes[is_write]  = 0;
>> +        bs->io_disps.bytes[!is_write] = 0;
>> +
>> +        bs->io_disps.ios[is_write]    = 0;
>> +        bs->io_disps.ios[!is_write]   = 0;
>> +    }
>> +
>> +    /* If a limit was exceeded, immediately queue this request */
>> +    if ((bs->req_from_queue == false)
>> +        && !QTAILQ_EMPTY(&bs->block_queue->requests)) {
>> +        if (bs->io_limits.bps[BLOCK_IO_LIMIT_TOTAL]
>> +            || bs->io_limits.bps[is_write] || bs->io_limits.iops[is_write]
>> +            || bs->io_limits.iops[BLOCK_IO_LIMIT_TOTAL]) {
>> +            if (wait) {
>> +                *wait = 0;
>> +            }
>> +
>> +            return true;
>> +        }
>> +    }
>> +
>> +    elapsed_time  = real_time - bs->slice_start[is_write];
>> +    elapsed_time  /= (BLOCK_IO_SLICE_TIME * 10.0);
>> +
>> +    bps_ret  = bdrv_exceed_bps_limits(bs, nb_sectors,
>> +                                      is_write, elapsed_time, &bps_wait);
>> +    iops_ret = bdrv_exceed_iops_limits(bs, is_write,
>> +                                       elapsed_time, &iops_wait);
>> +    if (bps_ret || iops_ret) {
>> +        max_wait = bps_wait > iops_wait ? bps_wait : iops_wait;
>> +        if (wait) {
>> +            *wait = max_wait;
>> +        }
>> +
>> +        real_time = qemu_get_clock_ns(vm_clock);
>> +        if (bs->slice_end[is_write] < real_time + max_wait) {
>> +            bs->slice_end[is_write] = real_time + max_wait;
>> +        }
>> +
>> +        return true;
>> +    }
>> +
>> +    if (wait) {
>> +        *wait = 0;
>> +    }
>> +
>> +    return false;
>> +}
>>
>>  /**************************************************************/
>>  /* async I/Os */
>> @@ -2121,13 +2357,28 @@ BlockDriverAIOCB *bdrv_aio_readv(BlockDriverState *bs, int64_t sector_num,
>>  {
>>      BlockDriver *drv = bs->drv;
>>      BlockDriverAIOCB *ret;
>> +    uint64_t wait_time = 0;
>>
>>      trace_bdrv_aio_readv(bs, sector_num, nb_sectors, opaque);
>>
>> -    if (!drv)
>> -        return NULL;
>> -    if (bdrv_check_request(bs, sector_num, nb_sectors))
>> +    if (!drv || bdrv_check_request(bs, sector_num, nb_sectors)) {
>> +        if (bdrv_io_limits_enable(&bs->io_limits)) {
>> +            bs->req_from_queue = false;
>> +        }
>>          return NULL;
>> +    }
>> +
>> +    /* throttling disk read I/O */
>> +    if (bdrv_io_limits_enable(&bs->io_limits)) {
>> +        if (bdrv_exceed_io_limits(bs, nb_sectors, false, &wait_time)) {
>
> We need something like:
>                if (bs->request_from_queue) {
>                    return NULL;
>                }
>
> This allows the re-queue logic in bdrv_block_timer() to work correctly
> rather than creating an entirely new request at the tail.
Please see my comments above. NULL indicates to the upper layer that the
request failed to start.
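
Just to illustrate the ambiguity we are discussing: with the current code the
timer handler cannot tell "the request failed to start" apart from "the
request is still over the limits". One way around it, without overloading
NULL, would be for the throttling path to skip the second enqueue for
requests that already came from the queue and instead set a flag, so that
bdrv_block_timer() can decide between re-inserting the popped request at the
head and dropping it. This is purely a sketch; the req_requeued field is
made up:

    /* hypothetical field in BlockDriverState, set by the throttling path
     * whenever a request from the queue is still over the limits */
    bool req_requeued;

    static void bdrv_block_timer(void *opaque)
    {
        BlockDriverState *bs = opaque;
        BlockQueue *queue    = bs->block_queue;

        while (!QTAILQ_EMPTY(&queue->requests)) {
            BlockIORequest *request = QTAILQ_FIRST(&queue->requests);
            QTAILQ_REMOVE(&queue->requests, request, entry);

            bs->req_requeued = false;
            if (qemu_block_queue_handler(request) == 0 && bs->req_requeued) {
                /* still over the limits: back to the head, wait for the
                 * next timer shot */
                QTAILQ_INSERT_HEAD(&queue->requests, request, entry);
                break;
            }

            /* request was started, or genuinely failed: drop it */
            qemu_free(request);
        }
    }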
>
>> +            ret = qemu_block_queue_enqueue(bs->block_queue, bs, bdrv_aio_readv,
>> +                              sector_num, qiov, nb_sectors, cb, opaque);
>> +            qemu_mod_timer(bs->block_timer,
>> +                           wait_time + qemu_get_clock_ns(vm_clock));
>> +            bs->req_from_queue = false;
>> +            return ret;
>> +        }
>> +    }
>>
>>      ret = drv->bdrv_aio_readv(bs, sector_num, qiov, nb_sectors,
>>                                cb, opaque);
>> @@ -2136,6 +2387,16 @@ BlockDriverAIOCB *bdrv_aio_readv(BlockDriverState *bs, int64_t sector_num,
>>       /* Update stats even though technically transfer has not happened. */
>>       bs->rd_bytes += (unsigned) nb_sectors * BDRV_SECTOR_SIZE;
>>       bs->rd_ops ++;
>> +
>> +        if (bdrv_io_limits_enable(&bs->io_limits)) {
>> +            bs->io_disps.bytes[BLOCK_IO_LIMIT_READ] +=
>> +                              (unsigned) nb_sectors * BDRV_SECTOR_SIZE;
>> +            bs->io_disps.ios[BLOCK_IO_LIMIT_READ]++;
>> +        }
>> +    }
>> +
>> +    if (bdrv_io_limits_enable(&bs->io_limits)) {
>> +        bs->req_from_queue = false;
>>      }
>>
>>      return ret;
>> @@ -2184,15 +2445,18 @@ BlockDriverAIOCB *bdrv_aio_writev(BlockDriverState *bs, int64_t sector_num,
>>      BlockDriver *drv = bs->drv;
>>      BlockDriverAIOCB *ret;
>>      BlockCompleteData *blk_cb_data;
>> +    uint64_t wait_time = 0;
>>
>>      trace_bdrv_aio_writev(bs, sector_num, nb_sectors, opaque);
>>
>> -    if (!drv)
>> -        return NULL;
>> -    if (bs->read_only)
>> -        return NULL;
>> -    if (bdrv_check_request(bs, sector_num, nb_sectors))
>> +    if (!drv || bs->read_only
>> +        || bdrv_check_request(bs, sector_num, nb_sectors)) {
>> +        if (bdrv_io_limits_enable(&bs->io_limits)) {
>> +            bs->req_from_queue = false;
>> +        }
>> +
>>          return NULL;
>> +    }
>>
>>      if (bs->dirty_bitmap) {
>>          blk_cb_data = blk_dirty_cb_alloc(bs, sector_num, nb_sectors, cb,
>> @@ -2201,6 +2465,18 @@ BlockDriverAIOCB *bdrv_aio_writev(BlockDriverState *bs, int64_t sector_num,
>>          opaque = blk_cb_data;
>>      }
>>
>> +    /* throttling disk write I/O */
>> +    if (bdrv_io_limits_enable(&bs->io_limits)) {
>> +        if (bdrv_exceed_io_limits(bs, nb_sectors, true, &wait_time)) {
>
> same change as in readv is needed here.
Please see my comments above.
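
By the way, to make the timer arming concrete with the formulas above (the
numbers are only an example): assume the bps limit is 1,000,000 bytes/s, a
fresh 100 ms slice has just started, and a 256 KB (262,144-byte) write
arrives right at the start of the slice (nothing dispatched yet, so
elapsed_time is about 0):

    bytes_limit = bps_limit * slice_time  = 1,000,000 * 0.1 = 100,000
    bytes_disp + bytes_res = 0 + 262,144  > 100,000          -> queue it
    wait_time = 262,144 / 1,000,000 - 0   = 0.262 s
    *wait = wait_time * BLOCK_IO_SLICE_TIME * 10              ~ 262 ms in ns

so the request is queued, the block timer is armed about 262 ms from now,
and slice_end is pushed out by the same amount.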
>
>> +            ret = qemu_block_queue_enqueue(bs->block_queue, bs, bdrv_aio_writev,
>> +                               sector_num, qiov, nb_sectors, cb, opaque);
>> +            qemu_mod_timer(bs->block_timer,
>> +                           wait_time + qemu_get_clock_ns(vm_clock));
>> +            bs->req_from_queue = false;
>> +            return ret;
>> +        }
>> +    }
>> +
>>      ret = drv->bdrv_aio_writev(bs, sector_num, qiov, nb_sectors,
>>                                 cb, opaque);
>>
>> @@ -2211,6 +2487,16 @@ BlockDriverAIOCB *bdrv_aio_writev(BlockDriverState *bs, int64_t sector_num,
>>          if (bs->wr_highest_sector < sector_num + nb_sectors - 1) {
>>              bs->wr_highest_sector = sector_num + nb_sectors - 1;
>>          }
>> +
>> +        if (bdrv_io_limits_enable(&bs->io_limits)) {
>> +            bs->io_disps.bytes[BLOCK_IO_LIMIT_WRITE] +=
>> +                               (unsigned) nb_sectors * BDRV_SECTOR_SIZE;
>> +            bs->io_disps.ios[BLOCK_IO_LIMIT_WRITE]++;
>> +        }
>> +    }
>> +
>> +    if (bdrv_io_limits_enable(&bs->io_limits)) {
>> +        bs->req_from_queue = false;
>>      }
>>
>>      return ret;
>> diff --git a/block.h b/block.h
>> index 859d1d9..f0dac62 100644
>> --- a/block.h
>> +++ b/block.h
>> @@ -97,7 +97,6 @@ int bdrv_change_backing_file(BlockDriverState *bs,
>>      const char *backing_file, const char *backing_fmt);
>>  void bdrv_register(BlockDriver *bdrv);
>>
>> -
>>  typedef struct BdrvCheckResult {
>>      int corruptions;
>>      int leaks;
>> diff --git a/block_int.h b/block_int.h
>> index 1e265d2..1ca826b 100644
>> --- a/block_int.h
>> +++ b/block_int.h
>> @@ -27,10 +27,17 @@
>>  #include "block.h"
>>  #include "qemu-option.h"
>>  #include "qemu-queue.h"
>> +#include "block/blk-queue.h"
>>
>>  #define BLOCK_FLAG_ENCRYPT   1
>>  #define BLOCK_FLAG_COMPAT6   4
>>
>> +#define BLOCK_IO_LIMIT_READ     0
>> +#define BLOCK_IO_LIMIT_WRITE    1
>> +#define BLOCK_IO_LIMIT_TOTAL    2
>> +
>> +#define BLOCK_IO_SLICE_TIME     100000000
>> +
>>  #define BLOCK_OPT_SIZE          "size"
>>  #define BLOCK_OPT_ENCRYPT       "encryption"
>>  #define BLOCK_OPT_COMPAT6       "compat6"
>> @@ -46,6 +53,16 @@ typedef struct AIOPool {
>>      BlockDriverAIOCB *free_aiocb;
>>  } AIOPool;
>>
>> +typedef struct BlockIOLimit {
>> +    uint64_t bps[3];
>> +    uint64_t iops[3];
>> +} BlockIOLimit;
>> +
>> +typedef struct BlockIODisp {
>> +    uint64_t bytes[2];
>> +    uint64_t ios[2];
>> +} BlockIODisp;
>> +
>>  struct BlockDriver {
>>      const char *format_name;
>>      int instance_size;
>> @@ -175,6 +192,15 @@ struct BlockDriverState {
>>
>>      void *sync_aiocb;
>>
>> +    /* the time for latest disk I/O */
>> +    int64_t slice_start[2];
>> +    int64_t slice_end[2];
>> +    BlockIOLimit io_limits;
>> +    BlockIODisp  io_disps;
>> +    BlockQueue   *block_queue;
>> +    QEMUTimer    *block_timer;
>> +    bool         req_from_queue;
>> +
>>      /* I/O stats (display with "info blockstats"). */
>>      uint64_t rd_bytes;
>>      uint64_t wr_bytes;
>> @@ -222,6 +248,9 @@ void qemu_aio_release(void *p);
>>
>>  void *qemu_blockalign(BlockDriverState *bs, size_t size);
>>
>> +void bdrv_set_io_limits(BlockDriverState *bs,
>> +                        BlockIOLimit *io_limits);
>> +
>>  #ifdef _WIN32
>>  int is_windows_drive(const char *filename);
>>  #endif
>> --
>> 1.7.2.3
>>
>
> --
> Ryan Harper
> Software Engineer; Linux Technology Center
> IBM Corp., Austin, Tx
> ryanh@us.ibm.com
>

-- 
Regards,

Zhi Yong Wu