Date: Sun, 12 Mar 2017 01:24:15 +0000
From: Al Viro
To: Eric Biggers
Cc: linux-fsdevel@vger.kernel.org, David Howells, linux-kernel@vger.kernel.org, Eric Biggers
Subject: Re: [PATCH v2] statx: optimize copy of struct statx to userspace
Message-ID: <20170312012411.GN29622@ZenIV.linux.org.uk>
References: <20170311214555.941-1-ebiggers3@gmail.com>
In-Reply-To: <20170311214555.941-1-ebiggers3@gmail.com>

On Sat, Mar 11, 2017 at 01:45:55PM -0800, Eric Biggers wrote:
> From: Eric Biggers
>
> I found that statx() was significantly slower than stat().  As a
> microbenchmark, I compared 10,000,000 invocations of fstat() on a tmpfs
> file to the same with statx() passed a NULL path:

Umm...

> +	struct statx tmp;
> +
> +	tmp.stx_mask = stat->result_mask;
> +	tmp.stx_blksize = stat->blksize;
> +	tmp.stx_attributes = stat->attributes;
> +	tmp.stx_nlink = stat->nlink;
> +	tmp.stx_uid = from_kuid_munged(current_user_ns(), stat->uid);
> +	tmp.stx_gid = from_kgid_munged(current_user_ns(), stat->gid);
> +	tmp.stx_mode = stat->mode;
> +	memset(tmp.__spare0, 0, sizeof(tmp.__spare0));
> +	tmp.stx_ino = stat->ino;
> +	tmp.stx_size = stat->size;
> +	tmp.stx_blocks = stat->blocks;
> +	memset(tmp.__spare1, 0, sizeof(tmp.__spare1));
> +	init_statx_timestamp(&tmp.stx_atime, &stat->atime);
> +	init_statx_timestamp(&tmp.stx_btime, &stat->btime);
> +	init_statx_timestamp(&tmp.stx_ctime, &stat->ctime);
> +	init_statx_timestamp(&tmp.stx_mtime, &stat->mtime);
> +	tmp.stx_rdev_major = MAJOR(stat->rdev);
> +	tmp.stx_rdev_minor = MINOR(stat->rdev);
> +	tmp.stx_dev_major = MAJOR(stat->dev);
> +	tmp.stx_dev_minor = MINOR(stat->dev);
> +	memset(tmp.__spare2, 0, sizeof(tmp.__spare2));
> +
> +	return copy_to_user(buffer, &tmp, sizeof(tmp)) ? -EFAULT : 0;

That relies upon there being no padding in the damn structure.  It's true
and probably will be true on any target, but a) it's bloody well worth
stating explicitly and b)
	struct statx tmp = {.stx_mask = stat->result_mask};
will get rid of those memset() you've got there by implicit zeroing of
fields missing from a partial structure initializer.  Padding is *not*
included in that, but you are relying upon having no padding anyway...
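
For concreteness, a minimal sketch of the initializer-based variant being
suggested might look like the following.  The field assignments and the
init_statx_timestamp() helper come from the quoted hunk; the function name
cp_statx and the #include list are illustrative assumptions, not the patch
that was actually applied, and the whole thing still depends on struct statx
having no internal padding.

	/*
	 * Sketch of the suggestion: a partial designated initializer zeroes
	 * every member that is not explicitly named (including the __spare
	 * arrays), so the explicit memset() calls go away.  Padding bytes
	 * are not covered by that guarantee, hence the "no padding in
	 * struct statx" assumption must still hold and be stated explicitly.
	 */
	#include <linux/stat.h>
	#include <linux/uidgid.h>
	#include <linux/cred.h>
	#include <linux/kdev_t.h>
	#include <linux/uaccess.h>

	static int cp_statx(const struct kstat *stat, struct statx __user *buffer)
	{
		/* Everything not assigned below starts out as zero. */
		struct statx tmp = { .stx_mask = stat->result_mask };

		tmp.stx_blksize = stat->blksize;
		tmp.stx_attributes = stat->attributes;
		tmp.stx_nlink = stat->nlink;
		tmp.stx_uid = from_kuid_munged(current_user_ns(), stat->uid);
		tmp.stx_gid = from_kgid_munged(current_user_ns(), stat->gid);
		tmp.stx_mode = stat->mode;
		tmp.stx_ino = stat->ino;
		tmp.stx_size = stat->size;
		tmp.stx_blocks = stat->blocks;
		init_statx_timestamp(&tmp.stx_atime, &stat->atime);
		init_statx_timestamp(&tmp.stx_btime, &stat->btime);
		init_statx_timestamp(&tmp.stx_ctime, &stat->ctime);
		init_statx_timestamp(&tmp.stx_mtime, &stat->mtime);
		tmp.stx_rdev_major = MAJOR(stat->rdev);
		tmp.stx_rdev_minor = MINOR(stat->rdev);
		tmp.stx_dev_major = MAJOR(stat->dev);
		tmp.stx_dev_minor = MINOR(stat->dev);

		return copy_to_user(buffer, &tmp, sizeof(tmp)) ? -EFAULT : 0;
	}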