On Fri, May 14, 2021, 16:10 Emanuele Giuseppe Esposito <eesposit@redhat.com> wrote:
>> I'm not sure I like it since callers may still need coarser grained
>> locks to protect their own state or synchronize access to multiple
>> items of data. Also, some callers may not need thread-safety.
>>
>> Can the caller be responsible for locking instead (e.g. using
>> CoMutex)?

> Right now co-shared-resource is being used only by block-copy, so I
> guess locking it from the caller or within the API won't really matter
> in this case.
>
> One possible idea on how to delegate this to the caller without adding
> an extra small lock/unlock pair in block-copy is to move co_put_to_shres
> into block_copy_task_end, calling it only when a boolean passed to
> block_copy_task_end is true.
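
(For concreteness, I read that first idea as a signature change along
these lines; a hypothetical sketch, not code from the series:)

/* Hypothetical flag: the caller says whether the task's bytes should
 * be returned to the shared resource when the task ends. */
static void coroutine_fn block_copy_task_end(BlockCopyTask *task, int ret,
                                             bool put_shres)
{
    if (put_shres) {
        co_put_to_shres(task->s->mem, task->bytes);
    }
    /* ... rest of block_copy_task_end unchanged ... */
}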

The block-copy patch below won't work because qemu_co_queue_wait would
have to unlock the CoMutex; you would therefore have to pass the mutex
as an additional argument to co_get_from_shres.
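
That is, the signature would have to grow a lock parameter, roughly
like this (untested sketch, just to show the shape):

/* The caller holds 'lock' (s->tasks_lock in your patch);
 * qemu_co_queue_wait() drops it while the coroutine sleeps and
 * re-acquires it before returning. */
void coroutine_fn co_get_from_shres(SharedResource *s, uint64_t n,
                                    CoMutex *lock)
{
    assert(n <= s->total);
    while (s->available < n) {
        qemu_co_queue_wait(&s->queue, lock);
    }
    s->available -= n;
}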

Overall, neither co_get_from_shres nor AioTaskPool should be fast paths,
so using a local lock seems to produce the simplest API.
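
Roughly like this (a sketch of the direction, in the same spirit as the
qemu-co-shared-resource patch quoted at the bottom):

void coroutine_fn co_get_from_shres(SharedResource *s, uint64_t n)
{
    assert(n <= s->total);
    qemu_mutex_lock(&s->lock);
    while (s->available < n) {
        /* drops s->lock while the coroutine sleeps, re-takes it on wakeup */
        qemu_co_queue_wait(&s->queue, &s->lock);
    }
    s->available -= n;
    qemu_mutex_unlock(&s->lock);
}

void co_put_to_shres(SharedResource *s, uint64_t n)
{
    qemu_mutex_lock(&s->lock);
    assert(s->total - s->available >= n);
    s->available += n;
    qemu_co_queue_restart_all(&s->queue);
    qemu_mutex_unlock(&s->lock);
}

Callers then need no external serialization just to get/put, and the
thread-safety contract can be stated once in the header.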

Paolo


> Otherwise make b_c_task_end always call co_put_to_shres and then
> include co_get_from_shres in block_copy_task_create, so that we always
> add to the shared resource when a task is created and remove from it
> when the task ends (including the error case).
>
> Something like:
>
> diff --git a/block/block-copy.c b/block/block-copy.c
> index 3a447a7c3d..1e4914b0cb 100644
> --- a/block/block-copy.c
> +++ b/block/block-copy.c
> @@ -233,6 +233,7 @@ static coroutine_fn BlockCopyTask *block_copy_task_create(BlockCopyState *s,
>      /* region is dirty, so no existent tasks possible in it */
>      assert(!find_conflicting_task(s, offset, bytes));
>      QLIST_INSERT_HEAD(&s->tasks, task, list);
> +    co_get_from_shres(s->mem, task->bytes);
>      qemu_co_mutex_unlock(&s->tasks_lock);
>
>      return task;
> @@ -269,6 +270,7 @@ static void coroutine_fn block_copy_task_end(BlockCopyTask *task, int ret)
>          bdrv_set_dirty_bitmap(task->s->copy_bitmap, task->offset, task->bytes);
>      }
>      qemu_co_mutex_lock(&task->s->tasks_lock);
> +    co_put_to_shres(task->s->mem, task->bytes);
>      task->s->in_flight_bytes -= task->bytes;
>      QLIST_REMOVE(task, list);
>      progress_set_remaining(task->s->progress,
> @@ -379,7 +381,6 @@ static coroutine_fn int block_copy_task_run(AioTaskPool *pool,
>
>      aio_task_pool_wait_slot(pool);
>      if (aio_task_pool_status(pool) < 0) {
> -        co_put_to_shres(task->s->mem, task->bytes);
>          block_copy_task_end(task, -ECANCELED);
>          g_free(task);
>          return -ECANCELED;
> @@ -498,7 +499,6 @@ static coroutine_fn int block_copy_task_entry(AioTask *task)
>      }
>      qemu_mutex_unlock(&t->s->calls_lock);
>
> -    co_put_to_shres(t->s->mem, t->bytes);
>      block_copy_task_end(t, ret);
>
>      return ret;
> @@ -687,8 +687,6 @@ block_copy_dirty_clusters(BlockCopyCallState *call_state)
>
>          trace_block_copy_process(s, task->offset);
>
> -        co_get_from_shres(s->mem, task->bytes);
> -
>          offset = task_end(task);
>          bytes = end - offset;

>>
>>> diff --git a/util/qemu-co-shared-resource.c b/util/qemu-co-shared-resource.c
>>> index 1c83cd9d29..c455d02a1e 100644
>>> --- a/util/qemu-co-shared-resource.c
>>> +++ b/util/qemu-co-shared-resource.c
>>> @@ -32,6 +32,7 @@ struct SharedResource {
>>>      uint64_t available;
>>>
>>>      CoQueue queue;
>>> +    QemuMutex lock;
>>
>> Please add a comment indicating what this lock protects.
>>
>> Thread safety should also be documented in the header file so API users
>> know what to expect.
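
For instance something like this (only a sketch of the kind of comment
Stefan is asking for):

struct SharedResource {
    uint64_t total;      /* Set in shres_create() and not changed afterwards */
    uint64_t available;  /* Protected by lock */

    CoQueue queue;       /* Coroutines waiting for resources; protected by lock */
    QemuMutex lock;      /* Protects available and queue */
};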

> Will do, thanks.
>
> Emanuele