From: lixiubo@cmss.chinamobile.com
To: mchristi@redhat.com, nab@linux-iscsi.org
Cc: agrover@redhat.com, iliastsi@arrikto.com, namei.unix@gmail.com, sheng@yasker.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-kernel@vger.kernel.org, Xiubo Li
Subject: [PATCH v6 0/2] tcmu: Dynamic growing data area support
Date: Wed, 26 Apr 2017 14:25:50 +0800
Message-Id: <1493187952-13125-1-git-send-email-lixiubo@cmss.chinamobile.com>
X-Mailer: git-send-email 1.8.3.1
List-ID: <linux-kernel.vger.kernel.org>

From: Xiubo Li

Changed for V6:
- Remove tcmu_vma_close(), since the unmap thread will do the same work for it.
- The unmap thread will skip busy devices.
- Used and tested the V5 version for 3 weeks and the V6 for about 1 week,
  all in a high-IOPS environment:
  * using fio and dd commands
  * using about 4 targets based on user:rbd/user:file backends
  * set the global pool size to 512 * 1024 blocks * block_size; for a 4K
    page size, the size is 2G.
  * each target here needs more than 1100 blocks.
  * fio: -iodepth 16 -thread -rw=[read write] -bs=[1M] -size=20G
    -numjobs=10 -runtime=1000 ...
  * restart the tcmu-runner at any time.

Changed for V5:
- Rebase onto the newest target-pending repository.
- Add as many comments as possible to make the patch more readable.
- Move tcmu_handle_completions() from the timeout handler to the unmap
  thread, and replace the spin lock with a mutex (because the unmap_* or
  zap_* calls may sleep) to simplify the patch and the code.
- Thanks very much for Mike's tips and suggestions.
- Tested this for more than 3 days by:
  * using fio and dd commands
  * using about 1~5 targets
  * set the global pool size to [512 1024 2048 512 * 1024] blocks * block_size
  * each target here needs more than 450 blocks when running in my
    environments.
  * fio: -iodepth [1 2 4 8 16] -thread -rw=[read write] -bs=[1K 2K 3K 5K
    7K 16K 64K 1M] -size=20G -numjobs=10 -runtime=1000 ...
  * in tcmu-runner, try to touch blocks outside the tcmu_cmds' iov[]
    manually
  * restart the tcmu-runner at any time.
  * in my environment, for the low-IOPS case: the read throughput goes
    from about 5200KB/s to 6700KB/s; the write throughput goes from
    about 3000KB/s to 3700KB/s.

Xiubo Li (2):
  tcmu: Add dynamic growing data area feature support
  tcmu: Add global data block pool support

 drivers/target/target_core_user.c | 598 ++++++++++++++++++++++++++++++--------
 1 file changed, 469 insertions(+), 129 deletions(-)

-- 
1.8.3.1
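[Not part of the original mail.] As a quick check of the global pool sizing quoted in the changelog, here is a minimal sketch of the arithmetic, assuming one data block equals the page size (4K), as the cover letter states:

```python
# Sketch of the global data-area sizing from the cover letter.
# Assumption (per the text): one data block == PAGE_SIZE == 4 KiB.
PAGE_SIZE = 4 * 1024        # 4K page / data block
POOL_BLOCKS = 512 * 1024    # global pool size in blocks

pool_bytes = POOL_BLOCKS * PAGE_SIZE
print(pool_bytes // (1024 ** 3), "GiB")  # -> 2 GiB, matching the "2G" in the text
```

At more than 1100 blocks per target, roughly 4 MiB of the pool is in use per target in the reported test, so the 2G cap leaves ample headroom for the 4-target runs described.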