From: Damien Le Moal
Subject: Re: [PATCHv5 00/14] dm-zoned: metadata version 2
Date: Thu, 14 May 2020 00:55:10 +0000
References: <20200508090332.40716-1-hare@suse.de> <2553e593-795d-6aed-f983-e990a283e2ff@suse.de>
To: "Martin K. Petersen"
Cc: Bob Liu, "dm-devel@redhat.com", Mike Snitzer
List-Id: dm-devel.ids

On 2020/05/14 9:22, Martin K. Petersen wrote:
>
> Damien,
>
>> Any idea why the io_opt limit is not set to the physical block size
>> when the drive does not report an optimal transfer length? Would it
>> be bad to set that value instead of leaving it to 0?
>
> The original intent was that io_opt was a weak heuristic for something
> being a RAID device. Regular disk drives didn't report it. These days
> that distinction probably isn't relevant.
>
> However, before we entertain departing from the historic io_opt
> behavior, I am a bit puzzled by the fact that you have a device that
> reports io_opt as 512 bytes. What kind of device performs best when
> each I/O is limited to a single logical block?

Indeed. It is an NVMe M.2 consumer-grade SSD. Nothing fancy.

If you look at nvme_update_disk_info() in drivers/nvme/host/core.c, you will
see that io_opt is set to the block size... This is probably abusing this
limit. So I guess the most elegant fix may be to have nvme stop doing that?

-- 
Damien Le Moal
Western Digital Research
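
For reference, the io_opt behavior discussed above looks roughly like the
sketch below. This is a condensed paraphrase of the limit-setting part of
nvme_update_disk_info() as it stood around the v5.7 kernel, not verbatim
source: the _sketch function name and the reduction to just the block-size
limits are mine, and the NSFEAT bit-4 check corresponds to the NVMe 1.4
NPWG/NOWS reporting flag.

#include <linux/blkdev.h>	/* blk_queue_io_opt() and friends */
#include <linux/nvme.h>		/* struct nvme_id_ns */
#include "nvme.h"		/* struct nvme_ns (drivers/nvme/host/nvme.h) */

/*
 * Hypothetical sketch of the io_opt handling in nvme_update_disk_info(),
 * paraphrased from drivers/nvme/host/core.c (~v5.7). Not verbatim code.
 */
static void nvme_update_disk_info_sketch(struct gendisk *disk,
					 struct nvme_ns *ns,
					 struct nvme_id_ns *id)
{
	unsigned short bs = 1 << ns->lba_shift;	/* logical block size */
	u32 phys_bs, io_opt;

	/* Default: both limits fall back to the logical block size. */
	phys_bs = io_opt = bs;

	if (id->nsfeat & (1 << 4)) {
		/* NSFEAT bit 4: NPWG/NOWS fields are reported */
		phys_bs = bs * (1 + le16_to_cpu(id->npwg));	/* preferred write granularity */
		io_opt = bs * (1 + le16_to_cpu(id->nows));	/* optimal write size */
	}

	blk_queue_physical_block_size(disk->queue, phys_bs);
	blk_queue_io_min(disk->queue, phys_bs);
	/*
	 * The line in question: a namespace that reports no NOWS ends up
	 * advertising io_opt equal to its logical block size (e.g. 512 B)
	 * instead of leaving the limit at 0.
	 */
	blk_queue_io_opt(disk->queue, io_opt);
}

With no NOWS reported, io_opt thus defaults to the logical block size, which
is why the stacked dm-zoned device sees an io_opt of 512 bytes rather than 0.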